Feb 20 11:47:18.625192 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 20 11:47:19.296404 master-0 kubenswrapper[4180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 11:47:19.296404 master-0 kubenswrapper[4180]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 20 11:47:19.296404 master-0 kubenswrapper[4180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 11:47:19.296404 master-0 kubenswrapper[4180]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 11:47:19.296404 master-0 kubenswrapper[4180]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 20 11:47:19.296404 master-0 kubenswrapper[4180]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
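The deprecation warnings above all point at the same remedy: set these values in the KubeletConfiguration file named by --config (on this node, /etc/kubernetes/kubelet.conf per the FLAG dump that follows). A minimal sketch of the config-file equivalents, using the values this node logs — field names follow the kubelet.config.k8s.io/v1beta1 schema, and this is an illustrative fragment, not the node's actual file:

```yaml
# Sketch only: config-file equivalents for the deprecated flags warned about
# above, built from the values logged by this kubelet.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"      # replaces --container-runtime-endpoint
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"  # replaces --volume-plugin-dir
registerWithTaints:                                             # replaces --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:                                                 # replaces --system-reserved
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
# --minimum-container-ttl-duration has no config-file field; per the warning,
# eviction thresholds (evictionHard / evictionSoft) are the replacement.
```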
Feb 20 11:47:19.298322 master-0 kubenswrapper[4180]: I0220 11:47:19.297554 4180 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 20 11:47:19.302602 master-0 kubenswrapper[4180]: W0220 11:47:19.302556 4180 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 11:47:19.302602 master-0 kubenswrapper[4180]: W0220 11:47:19.302586 4180 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 11:47:19.302602 master-0 kubenswrapper[4180]: W0220 11:47:19.302596 4180 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 11:47:19.302602 master-0 kubenswrapper[4180]: W0220 11:47:19.302605 4180 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302615 4180 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302624 4180 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302632 4180 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302640 4180 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302648 4180 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302659 4180 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302668 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302678 4180 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302689 4180 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302700 4180 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302709 4180 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302717 4180 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302726 4180 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302734 4180 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302744 4180 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302751 4180 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302759 4180 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302767 4180 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 11:47:19.302831 master-0 kubenswrapper[4180]: W0220 11:47:19.302775 4180 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302783 4180 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302790 4180 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302798 4180 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302806 4180 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302813 4180 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302821 4180 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302829 4180 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302836 4180 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302845 4180 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302855 4180 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302863 4180 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302872 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302882 4180 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302892 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302902 4180 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302910 4180 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302919 4180 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302927 4180 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 11:47:19.303820 master-0 kubenswrapper[4180]: W0220 11:47:19.302935 4180 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.302943 4180 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.302951 4180 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.302959 4180 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.302967 4180 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.302978 4180 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.302986 4180 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.302994 4180 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303001 4180 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303009 4180 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303017 4180 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303025 4180 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303033 4180 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303042 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303055 4180 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303065 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303073 4180 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303082 4180 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303090 4180 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303099 4180 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 11:47:19.304881 master-0 kubenswrapper[4180]: W0220 11:47:19.303107 4180 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303115 4180 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303125 4180 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303132 4180 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303140 4180 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303148 4180 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303155 4180 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303163 4180 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303171 4180 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303179 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: W0220 11:47:19.303186 4180 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304297 4180 flags.go:64] FLAG: --address="0.0.0.0"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304320 4180 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304337 4180 flags.go:64] FLAG: --anonymous-auth="true"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304354 4180 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304365 4180 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304374 4180 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304386 4180 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304397 4180 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304406 4180 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304415 4180 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304425 4180 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 20 11:47:19.305881 master-0 kubenswrapper[4180]: I0220 11:47:19.304434 4180 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304444 4180 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304453 4180 flags.go:64] FLAG: --cgroup-root=""
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304462 4180 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304471 4180 flags.go:64] FLAG: --client-ca-file=""
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304480 4180 flags.go:64] FLAG: --cloud-config=""
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304489 4180 flags.go:64] FLAG: --cloud-provider=""
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304499 4180 flags.go:64] FLAG: --cluster-dns="[]"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304510 4180 flags.go:64] FLAG: --cluster-domain=""
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304518 4180 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304550 4180 flags.go:64] FLAG: --config-dir=""
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304559 4180 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304569 4180 flags.go:64] FLAG: --container-log-max-files="5"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304579 4180 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304589 4180 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304597 4180 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304607 4180 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304616 4180 flags.go:64] FLAG: --contention-profiling="false"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304625 4180 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304633 4180 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304643 4180 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304651 4180 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304663 4180 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304672 4180 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304681 4180 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 20 11:47:19.306904 master-0 kubenswrapper[4180]: I0220 11:47:19.304690 4180 flags.go:64] FLAG: --enable-load-reader="false"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304699 4180 flags.go:64] FLAG: --enable-server="true"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304707 4180 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304719 4180 flags.go:64] FLAG: --event-burst="100"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304729 4180 flags.go:64] FLAG: --event-qps="50"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304738 4180 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304747 4180 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304757 4180 flags.go:64] FLAG: --eviction-hard=""
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304768 4180 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304778 4180 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304787 4180 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304796 4180 flags.go:64] FLAG: --eviction-soft=""
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304806 4180 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304814 4180 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304823 4180 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304832 4180 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304841 4180 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304849 4180 flags.go:64] FLAG: --fail-swap-on="true"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304859 4180 flags.go:64] FLAG: --feature-gates=""
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304870 4180 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304879 4180 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304889 4180 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304898 4180 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304908 4180 flags.go:64] FLAG: --healthz-port="10248"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304918 4180 flags.go:64] FLAG: --help="false"
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304927 4180 flags.go:64] FLAG: --hostname-override=""
Feb 20 11:47:19.308000 master-0 kubenswrapper[4180]: I0220 11:47:19.304936 4180 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.304945 4180 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.304954 4180 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.304963 4180 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.304972 4180 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.304981 4180 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.304990 4180 flags.go:64] FLAG: --image-service-endpoint=""
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.304999 4180 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305008 4180 flags.go:64] FLAG: --kube-api-burst="100"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305017 4180 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305026 4180 flags.go:64] FLAG: --kube-api-qps="50"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305035 4180 flags.go:64] FLAG: --kube-reserved=""
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305044 4180 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305053 4180 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305064 4180 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305073 4180 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305081 4180 flags.go:64] FLAG: --lock-file=""
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305090 4180 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305100 4180 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305109 4180 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305121 4180 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305130 4180 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305138 4180 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305148 4180 flags.go:64] FLAG: --logging-format="text"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305156 4180 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 20 11:47:19.309162 master-0 kubenswrapper[4180]: I0220 11:47:19.305166 4180 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305175 4180 flags.go:64] FLAG: --manifest-url=""
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305184 4180 flags.go:64] FLAG: --manifest-url-header=""
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305196 4180 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305206 4180 flags.go:64] FLAG: --max-open-files="1000000"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305216 4180 flags.go:64] FLAG: --max-pods="110"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305225 4180 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305234 4180 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305244 4180 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305252 4180 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305262 4180 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305271 4180 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305280 4180 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305300 4180 flags.go:64] FLAG: --node-status-max-images="50"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305309 4180 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305319 4180 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305329 4180 flags.go:64] FLAG: --pod-cidr=""
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305337 4180 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305351 4180 flags.go:64] FLAG: --pod-manifest-path=""
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305360 4180 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305369 4180 flags.go:64] FLAG: --pods-per-core="0"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305378 4180 flags.go:64] FLAG: --port="10250"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305387 4180 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305396 4180 flags.go:64] FLAG: --provider-id=""
Feb 20 11:47:19.310709 master-0 kubenswrapper[4180]: I0220 11:47:19.305404 4180 flags.go:64] FLAG: --qos-reserved=""
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305414 4180 flags.go:64] FLAG: --read-only-port="10255"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305423 4180 flags.go:64] FLAG: --register-node="true"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305431 4180 flags.go:64] FLAG: --register-schedulable="true"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305441 4180 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305455 4180 flags.go:64] FLAG: --registry-burst="10"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305464 4180 flags.go:64] FLAG: --registry-qps="5"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305473 4180 flags.go:64] FLAG: --reserved-cpus=""
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305482 4180 flags.go:64] FLAG: --reserved-memory=""
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305492 4180 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305502 4180 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305511 4180 flags.go:64] FLAG: --rotate-certificates="false"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305519 4180 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305550 4180 flags.go:64] FLAG: --runonce="false"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305559 4180 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305570 4180 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305579 4180 flags.go:64] FLAG: --seccomp-default="false"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305588 4180 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305597 4180 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305606 4180 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305615 4180 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305624 4180 flags.go:64] FLAG: --storage-driver-password="root"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305633 4180 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305642 4180 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305651 4180 flags.go:64] FLAG: --storage-driver-user="root"
Feb 20 11:47:19.312007 master-0 kubenswrapper[4180]: I0220 11:47:19.305659 4180 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305669 4180 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305678 4180 flags.go:64] FLAG: --system-cgroups=""
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305687 4180 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305700 4180 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305709 4180 flags.go:64] FLAG: --tls-cert-file=""
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305718 4180 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305728 4180 flags.go:64] FLAG: --tls-min-version=""
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305742 4180 flags.go:64] FLAG: --tls-private-key-file=""
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305750 4180 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305759 4180 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305768 4180 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305777 4180 flags.go:64] FLAG: --v="2"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305788 4180 flags.go:64] FLAG: --version="false"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305799 4180 flags.go:64] FLAG: --vmodule=""
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305809 4180 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: I0220 11:47:19.305819 4180 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: W0220 11:47:19.306017 4180 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: W0220 11:47:19.306027 4180 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: W0220 11:47:19.306037 4180 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: W0220 11:47:19.306047 4180 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: W0220 11:47:19.306055 4180 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: W0220 11:47:19.306063 4180 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: W0220 11:47:19.306071 4180 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 11:47:19.313112 master-0 kubenswrapper[4180]: W0220 11:47:19.306080 4180 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306090 4180 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306130 4180 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306141 4180 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306150 4180 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306158 4180 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306166 4180 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306175 4180 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306183 4180 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306191 4180 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306199 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306207 4180 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306215 4180 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306223 4180 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306231 4180 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306239 4180 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306250 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306258 4180 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306266 4180 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306274 4180 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 11:47:19.314270 master-0 kubenswrapper[4180]: W0220 11:47:19.306281 4180 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306291 4180 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306301 4180 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306309 4180 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306317 4180 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306325 4180 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306332 4180 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306343 4180 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306353 4180 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306362 4180 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306370 4180 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306378 4180 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306388 4180 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306397 4180 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306405 4180 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306415 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306423 4180 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306431 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306440 4180 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 11:47:19.315517 master-0 kubenswrapper[4180]: W0220 11:47:19.306449 4180 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306458 4180 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306466 4180 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306474 4180 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306482 4180 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306491 4180 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306499 4180 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306507 4180 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306517 4180 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306552 4180 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306561 4180 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306570 4180 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306578 4180 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306586 4180 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306594 4180 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306602 4180 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306610 4180 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306618 4180 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306626 4180 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 11:47:19.316663 master-0 kubenswrapper[4180]: W0220 11:47:19.306634 4180 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 11:47:19.317484 master-0 kubenswrapper[4180]: W0220 11:47:19.306642 4180 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 11:47:19.317484 master-0 kubenswrapper[4180]: W0220 11:47:19.306650 4180 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 11:47:19.317484 master-0 kubenswrapper[4180]: W0220 11:47:19.306658 4180 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 11:47:19.317484 master-0 kubenswrapper[4180]: W0220 11:47:19.306666 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 11:47:19.317484 master-0 kubenswrapper[4180]: W0220 11:47:19.306673 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 11:47:19.317484 master-0 kubenswrapper[4180]: W0220 11:47:19.306681 4180 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 11:47:19.317484 master-0 kubenswrapper[4180]: I0220 11:47:19.306693 4180 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 11:47:19.321935 master-0 kubenswrapper[4180]: I0220 11:47:19.321862 4180 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 20 11:47:19.321935 master-0 kubenswrapper[4180]: I0220 11:47:19.321909 4180 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 20 11:47:19.322090 master-0 kubenswrapper[4180]: W0220 11:47:19.322037 4180 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 11:47:19.322090 master-0 kubenswrapper[4180]: W0220 11:47:19.322052 4180 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 11:47:19.322090 master-0 kubenswrapper[4180]: W0220 11:47:19.322060 4180 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 11:47:19.322090 master-0 kubenswrapper[4180]: W0220 11:47:19.322070 4180 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 11:47:19.322090 master-0 kubenswrapper[4180]: W0220 11:47:19.322078 4180 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 11:47:19.322090 master-0 kubenswrapper[4180]: W0220 11:47:19.322086 4180 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 11:47:19.322090 master-0 kubenswrapper[4180]: W0220 11:47:19.322093 4180 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322168 4180 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322180 4180 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322189 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322197 4180 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322205 4180 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322212 4180 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322220 4180 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322228 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322236 4180 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322244 4180 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322254 4180 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322264 4180 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322273 4180 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322281 4180 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322289 4180 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322296 4180 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322307 4180 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322318 4180 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 11:47:19.322466 master-0 kubenswrapper[4180]: W0220 11:47:19.322327 4180 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322335 4180 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322344 4180 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322355 4180 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322368 4180 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322377 4180 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322386 4180 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322394 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322402 4180 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322410 4180 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322417 4180 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322425 4180 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322433 4180 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322441 4180 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322451 4180 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322459 4180 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322467 4180 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322475 4180 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322482 4180 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 11:47:19.323359 master-0 kubenswrapper[4180]: W0220 11:47:19.322493 4180 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322504 4180 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322512 4180 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322522 4180 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322579 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322589 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322598 4180 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322606 4180 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322615 4180 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322624 4180 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322632 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322641 4180 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322649 4180 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322657 4180 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322665 4180 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322673 4180 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322681 4180 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322689 4180 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322697 4180 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322706 4180 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 11:47:19.324393 master-0 kubenswrapper[4180]: W0220 11:47:19.322714 4180 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.322721 4180 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.322729 4180 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.322738 4180 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.322745 4180 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.322753 4180 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.322761 4180 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.322769 4180 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: I0220 11:47:19.322782 4180 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.322999 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.323012 4180 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.323022 4180 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.323030 4180 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.323038 4180 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.323046 4180 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 11:47:19.325491 master-0 kubenswrapper[4180]: W0220 11:47:19.323053 4180 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323061 4180 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323069 4180 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323077 4180 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323085 4180 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323095 4180 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323105 4180 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323113 4180 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323121 4180 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323130 4180 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323138 4180 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323147 4180 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323156 4180 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323164 4180 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323172 4180 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323180 4180 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323188 4180 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323196 4180 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323204 4180 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 11:47:19.326236 master-0 kubenswrapper[4180]: W0220 11:47:19.323211 4180 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323222 4180 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323233 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323241 4180 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323251 4180 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323259 4180 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323267 4180 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323276 4180 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323284 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323292 4180 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323300 4180 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323307 4180 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323317 4180 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323328 4180 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323338 4180 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323347 4180 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323355 4180 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323363 4180 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323372 4180 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 11:47:19.327105 master-0 kubenswrapper[4180]: W0220 11:47:19.323381 4180 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323390 4180 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323398 4180 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323408 4180 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323416 4180 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323425 4180 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323432 4180 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323441 4180 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323449 4180 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323457 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323466 4180 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323474 4180 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323482 4180 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323490 4180 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323497 4180 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323505 4180 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323513 4180 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323521 4180 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323549 4180 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323557 4180 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 11:47:19.327961 master-0 kubenswrapper[4180]: W0220 11:47:19.323565 4180 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: W0220 11:47:19.323576 4180 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: W0220 11:47:19.323585 4180 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: W0220 11:47:19.323593 4180 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: W0220 11:47:19.323601 4180 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: W0220 11:47:19.323609 4180 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: W0220 11:47:19.323617 4180 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: W0220 11:47:19.323624 4180 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: I0220 11:47:19.323637 4180 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: I0220 11:47:19.324899 4180 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 20 11:47:19.328952 master-0 kubenswrapper[4180]: I0220 11:47:19.327848 4180 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 20 11:47:19.330166 master-0 kubenswrapper[4180]: I0220 11:47:19.330079 4180 server.go:997] "Starting client certificate rotation"
Feb 20 11:47:19.330166 master-0 kubenswrapper[4180]: I0220 11:47:19.330159 4180 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 20 11:47:19.330446 master-0 kubenswrapper[4180]: I0220 11:47:19.330392 4180 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 20 11:47:19.361561 master-0 kubenswrapper[4180]: I0220 11:47:19.361440 4180 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 11:47:19.365252 master-0 kubenswrapper[4180]: I0220 11:47:19.365166 4180 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 11:47:19.367502 master-0 kubenswrapper[4180]: E0220 11:47:19.367417 4180 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 11:47:19.388068 master-0 kubenswrapper[4180]: I0220 11:47:19.388016 4180 log.go:25] "Validated CRI v1 runtime API"
Feb 20 11:47:19.394089 master-0 kubenswrapper[4180]: I0220 11:47:19.394036 4180 log.go:25] "Validated CRI v1 image API"
Feb 20 11:47:19.396892 master-0 kubenswrapper[4180]: I0220 11:47:19.396840 4180 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 20 11:47:19.405264 master-0 kubenswrapper[4180]: I0220 11:47:19.405160 4180 fs.go:135]
Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 e4a1b3a0-c6e7-4552-b1bb-6cc9ae049a6f:/dev/vda3] Feb 20 11:47:19.405264 master-0 kubenswrapper[4180]: I0220 11:47:19.405248 4180 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Feb 20 11:47:19.441601 master-0 kubenswrapper[4180]: I0220 11:47:19.441101 4180 manager.go:217] Machine: {Timestamp:2026-02-20 11:47:19.436719278 +0000 UTC m=+0.611771168 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1d3cbc82ca3451894ea40b65f988770 SystemUUID:c1d3cbc8-2ca3-4518-94ea-40b65f988770 BootID:5aa007af-ada2-4850-bae5-7cd3dd4060ba Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 
Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:8e:d0:9c Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:ad:cf:59 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:7e:cc:dd:86:6c:ff Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 
Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 
Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 20 11:47:19.441772 master-0 kubenswrapper[4180]: I0220 11:47:19.441632 4180 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 20 11:47:19.441883 master-0 kubenswrapper[4180]: I0220 11:47:19.441833 4180 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 20 11:47:19.442291 master-0 kubenswrapper[4180]: I0220 11:47:19.442250 4180 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 20 11:47:19.442620 master-0 kubenswrapper[4180]: I0220 11:47:19.442557 4180 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 20 11:47:19.442937 master-0 kubenswrapper[4180]: I0220 11:47:19.442613 4180 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 20 11:47:19.443045 master-0 kubenswrapper[4180]: I0220 11:47:19.442955 4180 topology_manager.go:138] "Creating topology manager with none policy" Feb 20 11:47:19.443045 master-0 kubenswrapper[4180]: I0220 11:47:19.442976 4180 container_manager_linux.go:303] "Creating device plugin manager" Feb 20 11:47:19.443649 master-0 kubenswrapper[4180]: I0220 11:47:19.443609 4180 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 11:47:19.443649 master-0 kubenswrapper[4180]: I0220 11:47:19.443643 4180 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 11:47:19.444836 master-0 kubenswrapper[4180]: I0220 11:47:19.444797 4180 state_mem.go:36] "Initialized new in-memory state store" Feb 20 11:47:19.444978 master-0 kubenswrapper[4180]: I0220 11:47:19.444942 4180 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 20 11:47:19.450897 master-0 kubenswrapper[4180]: I0220 11:47:19.450857 4180 kubelet.go:418] "Attempting to sync node with API server" Feb 20 11:47:19.451442 master-0 kubenswrapper[4180]: I0220 11:47:19.451402 4180 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 20 11:47:19.451514 master-0 kubenswrapper[4180]: I0220 11:47:19.451447 4180 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 20 11:47:19.451514 master-0 kubenswrapper[4180]: I0220 11:47:19.451470 4180 kubelet.go:324] "Adding apiserver pod source" Feb 20 11:47:19.451514 master-0 kubenswrapper[4180]: I0220 11:47:19.451499 4180 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 20 11:47:19.458900 master-0 kubenswrapper[4180]: W0220 11:47:19.458785 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:19.458995 master-0 kubenswrapper[4180]: E0220 11:47:19.458916 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 
192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 11:47:19.458995 master-0 kubenswrapper[4180]: W0220 11:47:19.458931 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:19.459121 master-0 kubenswrapper[4180]: E0220 11:47:19.459039 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 11:47:19.459628 master-0 kubenswrapper[4180]: I0220 11:47:19.459567 4180 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 20 11:47:19.465677 master-0 kubenswrapper[4180]: I0220 11:47:19.465625 4180 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 20 11:47:19.466054 master-0 kubenswrapper[4180]: I0220 11:47:19.466009 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 20 11:47:19.466116 master-0 kubenswrapper[4180]: I0220 11:47:19.466063 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 20 11:47:19.466116 master-0 kubenswrapper[4180]: I0220 11:47:19.466085 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 20 11:47:19.466116 master-0 kubenswrapper[4180]: I0220 11:47:19.466104 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 20 11:47:19.466369 master-0 kubenswrapper[4180]: I0220 11:47:19.466123 4180 plugins.go:603] "Loaded 
volume plugin" pluginName="kubernetes.io/nfs" Feb 20 11:47:19.466369 master-0 kubenswrapper[4180]: I0220 11:47:19.466181 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 20 11:47:19.466369 master-0 kubenswrapper[4180]: I0220 11:47:19.466199 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 20 11:47:19.466369 master-0 kubenswrapper[4180]: I0220 11:47:19.466216 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 20 11:47:19.466369 master-0 kubenswrapper[4180]: I0220 11:47:19.466237 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 20 11:47:19.466369 master-0 kubenswrapper[4180]: I0220 11:47:19.466255 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 20 11:47:19.466369 master-0 kubenswrapper[4180]: I0220 11:47:19.466280 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 20 11:47:19.467331 master-0 kubenswrapper[4180]: I0220 11:47:19.467279 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 20 11:47:19.468678 master-0 kubenswrapper[4180]: I0220 11:47:19.468634 4180 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 20 11:47:19.469448 master-0 kubenswrapper[4180]: I0220 11:47:19.469404 4180 server.go:1280] "Started kubelet" Feb 20 11:47:19.470846 master-0 kubenswrapper[4180]: I0220 11:47:19.470402 4180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 20 11:47:19.470846 master-0 kubenswrapper[4180]: I0220 11:47:19.470685 4180 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 20 11:47:19.470846 master-0 kubenswrapper[4180]: I0220 11:47:19.470656 4180 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 20 11:47:19.471270 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 20 11:47:19.471461 master-0 kubenswrapper[4180]: I0220 11:47:19.471265 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:19.471597 master-0 kubenswrapper[4180]: I0220 11:47:19.471552 4180 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 20 11:47:19.475758 master-0 kubenswrapper[4180]: I0220 11:47:19.475701 4180 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 20 11:47:19.475931 master-0 kubenswrapper[4180]: I0220 11:47:19.475909 4180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 20 11:47:19.476295 master-0 kubenswrapper[4180]: E0220 11:47:19.476249 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:19.482849 master-0 kubenswrapper[4180]: E0220 11:47:19.475688 4180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1895f1ef84e6c561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.469352289 +0000 UTC m=+0.644404149,LastTimestamp:2026-02-20 11:47:19.469352289 +0000 UTC m=+0.644404149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:19.483171 master-0 kubenswrapper[4180]: I0220 
11:47:19.483124 4180 server.go:449] "Adding debug handlers to kubelet server" Feb 20 11:47:19.483364 master-0 kubenswrapper[4180]: I0220 11:47:19.483334 4180 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 20 11:47:19.484051 master-0 kubenswrapper[4180]: I0220 11:47:19.483990 4180 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 20 11:47:19.484496 master-0 kubenswrapper[4180]: I0220 11:47:19.484464 4180 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 20 11:47:19.484975 master-0 kubenswrapper[4180]: I0220 11:47:19.484929 4180 reconstruct.go:97] "Volume reconstruction finished" Feb 20 11:47:19.484975 master-0 kubenswrapper[4180]: I0220 11:47:19.484973 4180 reconciler.go:26] "Reconciler: start to sync state" Feb 20 11:47:19.486308 master-0 kubenswrapper[4180]: W0220 11:47:19.486083 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:19.486453 master-0 kubenswrapper[4180]: E0220 11:47:19.486306 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 11:47:19.486787 master-0 kubenswrapper[4180]: E0220 11:47:19.486020 4180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 20 11:47:19.487090 master-0 kubenswrapper[4180]: I0220 11:47:19.487045 4180 factory.go:219] 
Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 20 11:47:19.487267 master-0 kubenswrapper[4180]: I0220 11:47:19.487104 4180 factory.go:55] Registering systemd factory Feb 20 11:47:19.487267 master-0 kubenswrapper[4180]: I0220 11:47:19.487125 4180 factory.go:221] Registration of the systemd container factory successfully Feb 20 11:47:19.487693 master-0 kubenswrapper[4180]: I0220 11:47:19.487603 4180 factory.go:153] Registering CRI-O factory Feb 20 11:47:19.487693 master-0 kubenswrapper[4180]: I0220 11:47:19.487645 4180 factory.go:221] Registration of the crio container factory successfully Feb 20 11:47:19.487693 master-0 kubenswrapper[4180]: I0220 11:47:19.487679 4180 factory.go:103] Registering Raw factory Feb 20 11:47:19.487693 master-0 kubenswrapper[4180]: I0220 11:47:19.487701 4180 manager.go:1196] Started watching for new ooms in manager Feb 20 11:47:19.492979 master-0 kubenswrapper[4180]: E0220 11:47:19.492922 4180 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 20 11:47:19.493297 master-0 kubenswrapper[4180]: I0220 11:47:19.493260 4180 manager.go:319] Starting recovery of all containers Feb 20 11:47:19.518298 master-0 kubenswrapper[4180]: I0220 11:47:19.518253 4180 manager.go:324] Recovery completed Feb 20 11:47:19.533342 master-0 kubenswrapper[4180]: I0220 11:47:19.533305 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:19.534792 master-0 kubenswrapper[4180]: I0220 11:47:19.534707 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:19.534792 master-0 kubenswrapper[4180]: I0220 11:47:19.534782 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:19.534970 master-0 kubenswrapper[4180]: I0220 11:47:19.534801 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:19.535616 master-0 kubenswrapper[4180]: I0220 11:47:19.535583 4180 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 20 11:47:19.535616 master-0 kubenswrapper[4180]: I0220 11:47:19.535600 4180 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 20 11:47:19.535778 master-0 kubenswrapper[4180]: I0220 11:47:19.535632 4180 state_mem.go:36] "Initialized new in-memory state store" Feb 20 11:47:19.539189 master-0 kubenswrapper[4180]: I0220 11:47:19.539138 4180 policy_none.go:49] "None policy: Start" Feb 20 11:47:19.540235 master-0 kubenswrapper[4180]: I0220 11:47:19.540206 4180 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 20 11:47:19.540387 master-0 kubenswrapper[4180]: I0220 11:47:19.540367 4180 state_mem.go:35] "Initializing new in-memory state store" Feb 20 11:47:19.583329 master-0 kubenswrapper[4180]: E0220 11:47:19.583276 4180 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:19.615973 master-0 kubenswrapper[4180]: I0220 11:47:19.615759 4180 manager.go:334] "Starting Device Plugin manager" Feb 20 11:47:19.616221 master-0 kubenswrapper[4180]: I0220 11:47:19.616023 4180 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 20 11:47:19.616221 master-0 kubenswrapper[4180]: I0220 11:47:19.616047 4180 server.go:79] "Starting device plugin registration server" Feb 20 11:47:19.616594 master-0 kubenswrapper[4180]: I0220 11:47:19.616562 4180 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 20 11:47:19.616775 master-0 kubenswrapper[4180]: I0220 11:47:19.616592 4180 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 20 11:47:19.617412 master-0 kubenswrapper[4180]: I0220 11:47:19.617371 4180 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 20 11:47:19.617812 master-0 kubenswrapper[4180]: I0220 11:47:19.617514 4180 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 20 11:47:19.617812 master-0 kubenswrapper[4180]: I0220 11:47:19.617572 4180 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 20 11:47:19.619711 master-0 kubenswrapper[4180]: E0220 11:47:19.619654 4180 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 20 11:47:19.660661 master-0 kubenswrapper[4180]: I0220 11:47:19.660561 4180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 20 11:47:19.670336 master-0 kubenswrapper[4180]: I0220 11:47:19.663248 4180 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Feb 20 11:47:19.670336 master-0 kubenswrapper[4180]: I0220 11:47:19.663312 4180 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 20 11:47:19.670336 master-0 kubenswrapper[4180]: I0220 11:47:19.663351 4180 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 20 11:47:19.670336 master-0 kubenswrapper[4180]: E0220 11:47:19.663422 4180 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 20 11:47:19.670336 master-0 kubenswrapper[4180]: W0220 11:47:19.665044 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:19.670336 master-0 kubenswrapper[4180]: E0220 11:47:19.665158 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 11:47:19.688282 master-0 kubenswrapper[4180]: E0220 11:47:19.688179 4180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 20 11:47:19.717388 master-0 kubenswrapper[4180]: I0220 11:47:19.717313 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.719098 master-0 kubenswrapper[4180]: I0220 11:47:19.719046 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.719098 master-0 kubenswrapper[4180]: I0220 11:47:19.719100 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.719326 master-0 kubenswrapper[4180]: I0220 11:47:19.719118 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.719326 master-0 kubenswrapper[4180]: I0220 11:47:19.719159 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 11:47:19.720163 master-0 kubenswrapper[4180]: E0220 11:47:19.720085 4180 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 20 11:47:19.764342 master-0 kubenswrapper[4180]: I0220 11:47:19.764248 4180 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 20 11:47:19.764472 master-0 kubenswrapper[4180]: I0220 11:47:19.764371 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.765637 master-0 kubenswrapper[4180]: I0220 11:47:19.765572 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.765766 master-0 kubenswrapper[4180]: I0220 11:47:19.765646 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.765766 master-0 kubenswrapper[4180]: I0220 11:47:19.765672 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.765890 master-0 kubenswrapper[4180]: I0220 11:47:19.765812 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.766154 master-0 kubenswrapper[4180]: I0220 11:47:19.766103 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:47:19.766292 master-0 kubenswrapper[4180]: I0220 11:47:19.766168 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.766971 master-0 kubenswrapper[4180]: I0220 11:47:19.766903 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.766971 master-0 kubenswrapper[4180]: I0220 11:47:19.766964 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.767142 master-0 kubenswrapper[4180]: I0220 11:47:19.766988 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.767142 master-0 kubenswrapper[4180]: I0220 11:47:19.767132 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.767264 master-0 kubenswrapper[4180]: I0220 11:47:19.767174 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.767264 master-0 kubenswrapper[4180]: I0220 11:47:19.767215 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.767264 master-0 kubenswrapper[4180]: I0220 11:47:19.767233 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.767417 master-0 kubenswrapper[4180]: I0220 11:47:19.767330 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:47:19.767417 master-0 kubenswrapper[4180]: I0220 11:47:19.767395 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.768497 master-0 kubenswrapper[4180]: I0220 11:47:19.768447 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.768497 master-0 kubenswrapper[4180]: I0220 11:47:19.768476 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.768758 master-0 kubenswrapper[4180]: I0220 11:47:19.768512 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.768758 master-0 kubenswrapper[4180]: I0220 11:47:19.768558 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.768758 master-0 kubenswrapper[4180]: I0220 11:47:19.768568 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.768758 master-0 kubenswrapper[4180]: I0220 11:47:19.768578 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.769084 master-0 kubenswrapper[4180]: I0220 11:47:19.768796 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.769084 master-0 kubenswrapper[4180]: I0220 11:47:19.768892 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:47:19.769084 master-0 kubenswrapper[4180]: I0220 11:47:19.768935 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.769754 master-0 kubenswrapper[4180]: I0220 11:47:19.769715 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.769869 master-0 kubenswrapper[4180]: I0220 11:47:19.769776 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.769869 master-0 kubenswrapper[4180]: I0220 11:47:19.769797 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.769982 master-0 kubenswrapper[4180]: I0220 11:47:19.769899 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.769982 master-0 kubenswrapper[4180]: I0220 11:47:19.769972 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.770111 master-0 kubenswrapper[4180]: I0220 11:47:19.770000 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.770111 master-0 kubenswrapper[4180]: I0220 11:47:19.769996 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.770273 master-0 kubenswrapper[4180]: I0220 11:47:19.770141 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.770273 master-0 kubenswrapper[4180]: I0220 11:47:19.770232 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.771154 master-0 kubenswrapper[4180]: I0220 11:47:19.771107 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.771154 master-0 kubenswrapper[4180]: I0220 11:47:19.771147 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.771332 master-0 kubenswrapper[4180]: I0220 11:47:19.771164 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.771332 master-0 kubenswrapper[4180]: I0220 11:47:19.771245 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.771332 master-0 kubenswrapper[4180]: I0220 11:47:19.771283 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.771332 master-0 kubenswrapper[4180]: I0220 11:47:19.771300 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.771561 master-0 kubenswrapper[4180]: I0220 11:47:19.771376 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.771561 master-0 kubenswrapper[4180]: I0220 11:47:19.771422 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.772453 master-0 kubenswrapper[4180]: I0220 11:47:19.772400 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.772453 master-0 kubenswrapper[4180]: I0220 11:47:19.772452 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.772707 master-0 kubenswrapper[4180]: I0220 11:47:19.772474 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.786564 master-0 kubenswrapper[4180]: I0220 11:47:19.786496 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:47:19.786674 master-0 kubenswrapper[4180]: I0220 11:47:19.786588 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:47:19.786674 master-0 kubenswrapper[4180]: I0220 11:47:19.786641 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.786804 master-0 kubenswrapper[4180]: I0220 11:47:19.786689 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.786804 master-0 kubenswrapper[4180]: I0220 11:47:19.786777 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.786915 master-0 kubenswrapper[4180]: I0220 11:47:19.786826 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.786915 master-0 kubenswrapper[4180]: I0220 11:47:19.786863 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:47:19.787029 master-0 kubenswrapper[4180]: I0220 11:47:19.786910 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.787029 master-0 kubenswrapper[4180]: I0220 11:47:19.786963 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.787029 master-0 kubenswrapper[4180]: I0220 11:47:19.787009 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:47:19.787193 master-0 kubenswrapper[4180]: I0220 11:47:19.787055 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.787193 master-0 kubenswrapper[4180]: I0220 11:47:19.787100 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.787193 master-0 kubenswrapper[4180]: I0220 11:47:19.787147 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.787365 master-0 kubenswrapper[4180]: I0220 11:47:19.787191 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.787365 master-0 kubenswrapper[4180]: I0220 11:47:19.787240 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:47:19.787365 master-0 kubenswrapper[4180]: I0220 11:47:19.787323 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:47:19.787557 master-0 kubenswrapper[4180]: I0220 11:47:19.787374 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.887717 master-0 kubenswrapper[4180]: I0220 11:47:19.887687 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.887917 master-0 kubenswrapper[4180]: I0220 11:47:19.887732 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.887917 master-0 kubenswrapper[4180]: I0220 11:47:19.887765 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.887917 master-0 kubenswrapper[4180]: I0220 11:47:19.887853 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.887917 master-0 kubenswrapper[4180]: I0220 11:47:19.887904 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.887931 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.887937 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.887974 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.887982 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888013 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888042 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888070 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888112 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888128 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888187 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888202 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888232 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888250 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888219 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888297 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888320 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.888358 master-0 kubenswrapper[4180]: I0220 11:47:19.888363 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888369 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888423 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888441 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888474 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888506 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888565 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888601 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888600 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888667 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888677 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888714 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.889994 master-0 kubenswrapper[4180]: I0220 11:47:19.888780 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:19.900131 master-0 kubenswrapper[4180]: I0220 11:47:19.900083 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:19.920437 master-0 kubenswrapper[4180]: I0220 11:47:19.920382 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:19.921586 master-0 kubenswrapper[4180]: I0220 11:47:19.921554 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:19.921689 master-0 kubenswrapper[4180]: I0220 11:47:19.921593 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:19.921689 master-0 kubenswrapper[4180]: I0220 11:47:19.921610 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:19.921689 master-0 kubenswrapper[4180]: I0220 11:47:19.921662 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 11:47:19.922803 master-0 kubenswrapper[4180]: E0220 11:47:19.922747 4180 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 20 11:47:20.089653 master-0 kubenswrapper[4180]: E0220 11:47:20.089428 4180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 20 11:47:20.111727 master-0 kubenswrapper[4180]: I0220 11:47:20.111659 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:47:20.136822 master-0 kubenswrapper[4180]: I0220 11:47:20.136745 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:47:20.165765 master-0 kubenswrapper[4180]: I0220 11:47:20.165693 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:47:20.188053 master-0 kubenswrapper[4180]: I0220 11:47:20.187985 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:20.323225 master-0 kubenswrapper[4180]: I0220 11:47:20.323148 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:20.324487 master-0 kubenswrapper[4180]: I0220 11:47:20.324440 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:20.324598 master-0 kubenswrapper[4180]: I0220 11:47:20.324494 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:20.324598 master-0 kubenswrapper[4180]: I0220 11:47:20.324517 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:20.324712 master-0 kubenswrapper[4180]: I0220 11:47:20.324617 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 11:47:20.325788 master-0 kubenswrapper[4180]: E0220 11:47:20.325721 4180 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 20 11:47:20.348154 master-0 kubenswrapper[4180]: W0220 11:47:20.347973 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:20.348154 master-0 kubenswrapper[4180]: E0220 11:47:20.348107 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 11:47:20.474236 master-0 kubenswrapper[4180]: I0220 11:47:20.474135 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:20.493127 master-0 kubenswrapper[4180]: W0220 11:47:20.492672 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:20.493292 master-0 kubenswrapper[4180]: E0220 11:47:20.493132 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 11:47:20.593301 master-0 kubenswrapper[4180]: W0220 11:47:20.593230 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:20.593456 master-0 kubenswrapper[4180]: E0220 11:47:20.593309 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 11:47:20.649371 master-0 kubenswrapper[4180]: W0220 11:47:20.649302 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc997c8e9d3be51d454d8e61e376bef08.slice/crio-fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d WatchSource:0}: Error finding container fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d: Status 404 returned error can't find the container with id fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d
Feb 20 11:47:20.652052 master-0 kubenswrapper[4180]: W0220 11:47:20.651991 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687e92a6cecf1e2beeef16a0b322ad08.slice/crio-9c62c9e4d7c03ed804b559ff9f9468e7a7f91ed8870a1b6239bb6b24438d3b6a WatchSource:0}: Error finding container 9c62c9e4d7c03ed804b559ff9f9468e7a7f91ed8870a1b6239bb6b24438d3b6a: Status 404 returned error can't find the container with id 9c62c9e4d7c03ed804b559ff9f9468e7a7f91ed8870a1b6239bb6b24438d3b6a
Feb 20 11:47:20.658794 master-0 kubenswrapper[4180]: I0220 11:47:20.658749 4180 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 11:47:20.669203 master-0 kubenswrapper[4180]: I0220 11:47:20.669048 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"9c62c9e4d7c03ed804b559ff9f9468e7a7f91ed8870a1b6239bb6b24438d3b6a"}
Feb 20 11:47:20.673390 master-0 kubenswrapper[4180]: I0220
11:47:20.673312 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d"} Feb 20 11:47:20.679662 master-0 kubenswrapper[4180]: W0220 11:47:20.679595 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c3cb71c9851003c8de7e7c5db4b87e.slice/crio-d2c649879e879ea783f4e70fae9dbd4ad2a036263190c8c86c941dcd3804935b WatchSource:0}: Error finding container d2c649879e879ea783f4e70fae9dbd4ad2a036263190c8c86c941dcd3804935b: Status 404 returned error can't find the container with id d2c649879e879ea783f4e70fae9dbd4ad2a036263190c8c86c941dcd3804935b Feb 20 11:47:20.689565 master-0 kubenswrapper[4180]: W0220 11:47:20.689489 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12dab5d350ebc129b0bfa4714d330b15.slice/crio-e0641894015187b3510c96c6c6bd4f01c441dcb52b2dacc02ae9839b7ddf2146 WatchSource:0}: Error finding container e0641894015187b3510c96c6c6bd4f01c441dcb52b2dacc02ae9839b7ddf2146: Status 404 returned error can't find the container with id e0641894015187b3510c96c6c6bd4f01c441dcb52b2dacc02ae9839b7ddf2146 Feb 20 11:47:20.715728 master-0 kubenswrapper[4180]: W0220 11:47:20.715668 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ad9373c007a4fcd25e70622bdc8deb.slice/crio-fcd9999695d850ee86685844ce22164c47296c700a3e8af3d20ba2a180990b4a WatchSource:0}: Error finding container fcd9999695d850ee86685844ce22164c47296c700a3e8af3d20ba2a180990b4a: Status 404 returned error can't find the container with id fcd9999695d850ee86685844ce22164c47296c700a3e8af3d20ba2a180990b4a Feb 20 11:47:20.840616 master-0 kubenswrapper[4180]: W0220 
11:47:20.840456 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:20.840616 master-0 kubenswrapper[4180]: E0220 11:47:20.840599 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 11:47:20.890605 master-0 kubenswrapper[4180]: E0220 11:47:20.890451 4180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 20 11:47:21.126881 master-0 kubenswrapper[4180]: I0220 11:47:21.126793 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:21.128225 master-0 kubenswrapper[4180]: I0220 11:47:21.128169 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:21.128307 master-0 kubenswrapper[4180]: I0220 11:47:21.128228 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:21.128307 master-0 kubenswrapper[4180]: I0220 11:47:21.128248 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:21.128307 master-0 kubenswrapper[4180]: I0220 11:47:21.128301 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 
11:47:21.129237 master-0 kubenswrapper[4180]: E0220 11:47:21.129184 4180 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 20 11:47:21.472437 master-0 kubenswrapper[4180]: I0220 11:47:21.472387 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:21.494890 master-0 kubenswrapper[4180]: I0220 11:47:21.494844 4180 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 20 11:47:21.495918 master-0 kubenswrapper[4180]: E0220 11:47:21.495867 4180 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 11:47:21.678162 master-0 kubenswrapper[4180]: I0220 11:47:21.678091 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"d2c649879e879ea783f4e70fae9dbd4ad2a036263190c8c86c941dcd3804935b"} Feb 20 11:47:21.681280 master-0 kubenswrapper[4180]: I0220 11:47:21.680150 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"fcd9999695d850ee86685844ce22164c47296c700a3e8af3d20ba2a180990b4a"} Feb 20 11:47:21.681575 master-0 kubenswrapper[4180]: I0220 
11:47:21.681541 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"e0641894015187b3510c96c6c6bd4f01c441dcb52b2dacc02ae9839b7ddf2146"} Feb 20 11:47:21.743809 master-0 kubenswrapper[4180]: E0220 11:47:21.743584 4180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1895f1ef84e6c561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.469352289 +0000 UTC m=+0.644404149,LastTimestamp:2026-02-20 11:47:19.469352289 +0000 UTC m=+0.644404149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:22.473271 master-0 kubenswrapper[4180]: I0220 11:47:22.473205 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:22.491730 master-0 kubenswrapper[4180]: E0220 11:47:22.491681 4180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 20 11:47:22.595129 master-0 kubenswrapper[4180]: W0220 11:47:22.594955 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:22.595129 master-0 kubenswrapper[4180]: E0220 11:47:22.595091 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 11:47:22.686350 master-0 kubenswrapper[4180]: I0220 11:47:22.686285 4180 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="fbba6df4a59d8edb9a6ffa0ddbac2d1f8af28cf04b9ed9d72f140a13ab377500" exitCode=0 Feb 20 11:47:22.686566 master-0 kubenswrapper[4180]: I0220 11:47:22.686358 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"fbba6df4a59d8edb9a6ffa0ddbac2d1f8af28cf04b9ed9d72f140a13ab377500"} Feb 20 11:47:22.686566 master-0 kubenswrapper[4180]: I0220 11:47:22.686388 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:22.687747 master-0 kubenswrapper[4180]: I0220 11:47:22.687718 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:22.687827 master-0 kubenswrapper[4180]: I0220 11:47:22.687758 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:22.687827 master-0 kubenswrapper[4180]: I0220 11:47:22.687768 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:22.730120 master-0 
kubenswrapper[4180]: I0220 11:47:22.730077 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:22.731027 master-0 kubenswrapper[4180]: I0220 11:47:22.730983 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:22.731027 master-0 kubenswrapper[4180]: I0220 11:47:22.731027 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:22.731186 master-0 kubenswrapper[4180]: I0220 11:47:22.731040 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:22.731186 master-0 kubenswrapper[4180]: I0220 11:47:22.731101 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 11:47:22.731978 master-0 kubenswrapper[4180]: E0220 11:47:22.731937 4180 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 20 11:47:23.059781 master-0 kubenswrapper[4180]: W0220 11:47:23.059668 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:23.059781 master-0 kubenswrapper[4180]: E0220 11:47:23.059764 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 11:47:23.473051 master-0 kubenswrapper[4180]: I0220 
11:47:23.473002 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:23.675212 master-0 kubenswrapper[4180]: W0220 11:47:23.675160 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:23.675212 master-0 kubenswrapper[4180]: E0220 11:47:23.675209 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 11:47:23.690111 master-0 kubenswrapper[4180]: I0220 11:47:23.690064 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a"} Feb 20 11:47:23.692013 master-0 kubenswrapper[4180]: I0220 11:47:23.691918 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log" Feb 20 11:47:23.692395 master-0 kubenswrapper[4180]: I0220 11:47:23.692359 4180 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="c57ec0865a2ad2ffbe2e0cdc9f6db6ddb85685be0b1be832df6972f6fd3bf1fb" exitCode=1 Feb 20 11:47:23.692485 master-0 kubenswrapper[4180]: I0220 11:47:23.692392 4180 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"c57ec0865a2ad2ffbe2e0cdc9f6db6ddb85685be0b1be832df6972f6fd3bf1fb"} Feb 20 11:47:23.692485 master-0 kubenswrapper[4180]: I0220 11:47:23.692454 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:23.693367 master-0 kubenswrapper[4180]: I0220 11:47:23.693336 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:23.693452 master-0 kubenswrapper[4180]: I0220 11:47:23.693376 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:23.693452 master-0 kubenswrapper[4180]: I0220 11:47:23.693388 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:23.693732 master-0 kubenswrapper[4180]: I0220 11:47:23.693706 4180 scope.go:117] "RemoveContainer" containerID="c57ec0865a2ad2ffbe2e0cdc9f6db6ddb85685be0b1be832df6972f6fd3bf1fb" Feb 20 11:47:23.699421 master-0 kubenswrapper[4180]: W0220 11:47:23.699341 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:23.699620 master-0 kubenswrapper[4180]: E0220 11:47:23.699421 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 11:47:24.473225 master-0 
kubenswrapper[4180]: I0220 11:47:24.473171 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:24.696142 master-0 kubenswrapper[4180]: I0220 11:47:24.696093 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3"} Feb 20 11:47:24.696599 master-0 kubenswrapper[4180]: I0220 11:47:24.696192 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:24.697305 master-0 kubenswrapper[4180]: I0220 11:47:24.697264 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:24.697363 master-0 kubenswrapper[4180]: I0220 11:47:24.697318 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:24.697363 master-0 kubenswrapper[4180]: I0220 11:47:24.697337 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:24.698383 master-0 kubenswrapper[4180]: I0220 11:47:24.698365 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 20 11:47:24.698932 master-0 kubenswrapper[4180]: I0220 11:47:24.698906 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log" Feb 20 11:47:24.699325 master-0 kubenswrapper[4180]: I0220 11:47:24.699289 4180 generic.go:334] "Generic 
(PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="a2c683f9a896ca400275a05e822759979315bcb526b2a548a70aa7107622475e" exitCode=1 Feb 20 11:47:24.699325 master-0 kubenswrapper[4180]: I0220 11:47:24.699323 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"a2c683f9a896ca400275a05e822759979315bcb526b2a548a70aa7107622475e"} Feb 20 11:47:24.699407 master-0 kubenswrapper[4180]: I0220 11:47:24.699346 4180 scope.go:117] "RemoveContainer" containerID="c57ec0865a2ad2ffbe2e0cdc9f6db6ddb85685be0b1be832df6972f6fd3bf1fb" Feb 20 11:47:24.699407 master-0 kubenswrapper[4180]: I0220 11:47:24.699369 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:24.700509 master-0 kubenswrapper[4180]: I0220 11:47:24.700451 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:24.700509 master-0 kubenswrapper[4180]: I0220 11:47:24.700481 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:24.700509 master-0 kubenswrapper[4180]: I0220 11:47:24.700491 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:24.700818 master-0 kubenswrapper[4180]: I0220 11:47:24.700803 4180 scope.go:117] "RemoveContainer" containerID="a2c683f9a896ca400275a05e822759979315bcb526b2a548a70aa7107622475e" Feb 20 11:47:24.700940 master-0 kubenswrapper[4180]: E0220 11:47:24.700921 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio 
pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 20 11:47:25.473007 master-0 kubenswrapper[4180]: I0220 11:47:25.472925 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:25.693284 master-0 kubenswrapper[4180]: E0220 11:47:25.693224 4180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 20 11:47:25.703456 master-0 kubenswrapper[4180]: I0220 11:47:25.703405 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 20 11:47:25.704265 master-0 kubenswrapper[4180]: I0220 11:47:25.704226 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:25.704363 master-0 kubenswrapper[4180]: I0220 11:47:25.704224 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:25.704927 master-0 kubenswrapper[4180]: I0220 11:47:25.704890 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:25.704927 master-0 kubenswrapper[4180]: I0220 11:47:25.704922 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:25.704927 master-0 kubenswrapper[4180]: I0220 11:47:25.704931 4180 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:25.705147 master-0 kubenswrapper[4180]: I0220 11:47:25.705108 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:25.705147 master-0 kubenswrapper[4180]: I0220 11:47:25.705149 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:25.705259 master-0 kubenswrapper[4180]: I0220 11:47:25.705161 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:25.705391 master-0 kubenswrapper[4180]: I0220 11:47:25.705346 4180 scope.go:117] "RemoveContainer" containerID="a2c683f9a896ca400275a05e822759979315bcb526b2a548a70aa7107622475e" Feb 20 11:47:25.705664 master-0 kubenswrapper[4180]: E0220 11:47:25.705624 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 20 11:47:25.845170 master-0 kubenswrapper[4180]: I0220 11:47:25.845106 4180 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 20 11:47:25.856665 master-0 kubenswrapper[4180]: E0220 11:47:25.856612 4180 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Feb 20 11:47:25.932681 master-0 kubenswrapper[4180]: I0220 11:47:25.932587 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:25.933768 master-0 kubenswrapper[4180]: I0220 11:47:25.933717 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:25.933768 master-0 kubenswrapper[4180]: I0220 11:47:25.933769 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:25.933904 master-0 kubenswrapper[4180]: I0220 11:47:25.933787 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:25.933904 master-0 kubenswrapper[4180]: I0220 11:47:25.933843 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 11:47:25.934724 master-0 kubenswrapper[4180]: E0220 11:47:25.934674 4180 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 20 11:47:26.472806 master-0 kubenswrapper[4180]: I0220 11:47:26.472714 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:26.574461 master-0 kubenswrapper[4180]: W0220 11:47:26.574334 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 11:47:26.574461 master-0 kubenswrapper[4180]: E0220 11:47:26.574420 4180 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 11:47:27.473561 master-0 kubenswrapper[4180]: I0220 11:47:27.473446 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:28.031567 master-0 kubenswrapper[4180]: W0220 11:47:28.031068 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:28.031729 master-0 kubenswrapper[4180]: E0220 11:47:28.031596 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 11:47:28.217474 master-0 kubenswrapper[4180]: W0220 11:47:28.217381 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:28.217692 master-0 kubenswrapper[4180]: E0220 11:47:28.217472 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 11:47:28.473460 master-0 kubenswrapper[4180]: I0220 11:47:28.473341 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:28.508422 master-0 kubenswrapper[4180]: W0220 11:47:28.508276 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 11:47:28.508422 master-0 kubenswrapper[4180]: E0220 11:47:28.508390 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 11:47:28.712162 master-0 kubenswrapper[4180]: I0220 11:47:28.711999 4180 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790" exitCode=0
Feb 20 11:47:28.712162 master-0 kubenswrapper[4180]: I0220 11:47:28.712134 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:28.712466 master-0 kubenswrapper[4180]: I0220 11:47:28.712166 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790"}
Feb 20 11:47:28.713281 master-0 kubenswrapper[4180]: I0220 11:47:28.713224 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:28.713281 master-0 kubenswrapper[4180]: I0220 11:47:28.713272 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:28.713485 master-0 kubenswrapper[4180]: I0220 11:47:28.713290 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:28.715665 master-0 kubenswrapper[4180]: I0220 11:47:28.715595 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"63a4ec3dde8f5a0e5831c20c7c43b03806a786d19e88fcb36793fe30ce83f9e5"}
Feb 20 11:47:28.717450 master-0 kubenswrapper[4180]: I0220 11:47:28.717367 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:28.717827 master-0 kubenswrapper[4180]: I0220 11:47:28.717767 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"6f48bf3168ea3ca5cdb5d4b4fe30f40410c99744121d1afe1db8ccea90206a28"}
Feb 20 11:47:28.717946 master-0 kubenswrapper[4180]: I0220 11:47:28.717832 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:28.719013 master-0 kubenswrapper[4180]: I0220 11:47:28.718938 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:28.719013 master-0 kubenswrapper[4180]: I0220 11:47:28.718989 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:28.719013 master-0 kubenswrapper[4180]: I0220 11:47:28.719011 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:28.719287 master-0 kubenswrapper[4180]: I0220 11:47:28.719034 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:28.719287 master-0 kubenswrapper[4180]: I0220 11:47:28.719039 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:28.719287 master-0 kubenswrapper[4180]: I0220 11:47:28.719066 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:29.619895 master-0 kubenswrapper[4180]: E0220 11:47:29.619771 4180 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 20 11:47:29.723951 master-0 kubenswrapper[4180]: I0220 11:47:29.723887 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe"}
Feb 20 11:47:29.726035 master-0 kubenswrapper[4180]: I0220 11:47:29.726007 4180 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="63a4ec3dde8f5a0e5831c20c7c43b03806a786d19e88fcb36793fe30ce83f9e5" exitCode=1
Feb 20 11:47:29.726105 master-0 kubenswrapper[4180]: I0220 11:47:29.726094 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:29.726591 master-0 kubenswrapper[4180]: I0220 11:47:29.726488 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"63a4ec3dde8f5a0e5831c20c7c43b03806a786d19e88fcb36793fe30ce83f9e5"}
Feb 20 11:47:29.727185 master-0 kubenswrapper[4180]: I0220 11:47:29.727140 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:29.727185 master-0 kubenswrapper[4180]: I0220 11:47:29.727167 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:29.727185 master-0 kubenswrapper[4180]: I0220 11:47:29.727178 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:30.363301 master-0 kubenswrapper[4180]: I0220 11:47:30.363156 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:30.477504 master-0 kubenswrapper[4180]: I0220 11:47:30.477446 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:31.478764 master-0 kubenswrapper[4180]: I0220 11:47:31.478722 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:31.733220 master-0 kubenswrapper[4180]: I0220 11:47:31.733089 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"f1682d7b4b37ab8ab7b0e93abba0b5ee3a264e78978d6dc34d6d434f13d2a6ae"}
Feb 20 11:47:31.733394 master-0 kubenswrapper[4180]: I0220 11:47:31.733239 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:31.734637 master-0 kubenswrapper[4180]: I0220 11:47:31.734590 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:31.734699 master-0 kubenswrapper[4180]: I0220 11:47:31.734644 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:31.734699 master-0 kubenswrapper[4180]: I0220 11:47:31.734663 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:31.735120 master-0 kubenswrapper[4180]: I0220 11:47:31.735080 4180 scope.go:117] "RemoveContainer" containerID="63a4ec3dde8f5a0e5831c20c7c43b03806a786d19e88fcb36793fe30ce83f9e5"
Feb 20 11:47:31.753886 master-0 kubenswrapper[4180]: E0220 11:47:31.753706 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef84e6c561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.469352289 +0000 UTC m=+0.644404149,LastTimestamp:2026-02-20 11:47:19.469352289 +0000 UTC m=+0.644404149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.760410 master-0 kubenswrapper[4180]: E0220 11:47:31.760261 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88ccea03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534766595 +0000 UTC m=+0.709818455,LastTimestamp:2026-02-20 11:47:19.534766595 +0000 UTC m=+0.709818455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.765641 master-0 kubenswrapper[4180]: E0220 11:47:31.765472 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cd5570 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534794096 +0000 UTC m=+0.709845956,LastTimestamp:2026-02-20 11:47:19.534794096 +0000 UTC m=+0.709845956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.771034 master-0 kubenswrapper[4180]: E0220 11:47:31.770788 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cda189 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534813577 +0000 UTC m=+0.709865437,LastTimestamp:2026-02-20 11:47:19.534813577 +0000 UTC m=+0.709865437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.776488 master-0 kubenswrapper[4180]: E0220 11:47:31.776379 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef8dced09e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.618777246 +0000 UTC m=+0.793829106,LastTimestamp:2026-02-20 11:47:19.618777246 +0000 UTC m=+0.793829106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.784177 master-0 kubenswrapper[4180]: E0220 11:47:31.784040 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88ccea03\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88ccea03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534766595 +0000 UTC m=+0.709818455,LastTimestamp:2026-02-20 11:47:19.719080647 +0000 UTC m=+0.894132497,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.791078 master-0 kubenswrapper[4180]: E0220 11:47:31.790972 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cd5570\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cd5570 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534794096 +0000 UTC m=+0.709845956,LastTimestamp:2026-02-20 11:47:19.719112098 +0000 UTC m=+0.894163948,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.798232 master-0 kubenswrapper[4180]: E0220 11:47:31.798120 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cda189\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cda189 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534813577 +0000 UTC m=+0.709865437,LastTimestamp:2026-02-20 11:47:19.719129198 +0000 UTC m=+0.894181048,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.806305 master-0 kubenswrapper[4180]: E0220 11:47:31.806150 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88ccea03\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88ccea03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534766595 +0000 UTC m=+0.709818455,LastTimestamp:2026-02-20 11:47:19.76561923 +0000 UTC m=+0.940671090,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.813368 master-0 kubenswrapper[4180]: E0220 11:47:31.813224 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cd5570\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cd5570 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534794096 +0000 UTC m=+0.709845956,LastTimestamp:2026-02-20 11:47:19.765663451 +0000 UTC m=+0.940715301,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.821614 master-0 kubenswrapper[4180]: E0220 11:47:31.821060 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cda189\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cda189 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534813577 +0000 UTC m=+0.709865437,LastTimestamp:2026-02-20 11:47:19.765685422 +0000 UTC m=+0.940737282,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.825661 master-0 kubenswrapper[4180]: E0220 11:47:31.825479 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88ccea03\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88ccea03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534766595 +0000 UTC m=+0.709818455,LastTimestamp:2026-02-20 11:47:19.766937712 +0000 UTC m=+0.941989572,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.833597 master-0 kubenswrapper[4180]: E0220 11:47:31.833424 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cd5570\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cd5570 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534794096 +0000 UTC m=+0.709845956,LastTimestamp:2026-02-20 11:47:19.766979674 +0000 UTC m=+0.942031534,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.840722 master-0 kubenswrapper[4180]: E0220 11:47:31.840559 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cda189\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cda189 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534813577 +0000 UTC m=+0.709865437,LastTimestamp:2026-02-20 11:47:19.767002465 +0000 UTC m=+0.942054325,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.848840 master-0 kubenswrapper[4180]: E0220 11:47:31.848712 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88ccea03\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88ccea03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534766595 +0000 UTC m=+0.709818455,LastTimestamp:2026-02-20 11:47:19.767201721 +0000 UTC m=+0.942253571,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.853076 master-0 kubenswrapper[4180]: E0220 11:47:31.852846 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cd5570\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cd5570 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534794096 +0000 UTC m=+0.709845956,LastTimestamp:2026-02-20 11:47:19.767225432 +0000 UTC m=+0.942277292,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.857153 master-0 kubenswrapper[4180]: E0220 11:47:31.857049 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cda189\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cda189 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534813577 +0000 UTC m=+0.709865437,LastTimestamp:2026-02-20 11:47:19.767242462 +0000 UTC m=+0.942294322,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.861129 master-0 kubenswrapper[4180]: E0220 11:47:31.860980 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88ccea03\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88ccea03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534766595 +0000 UTC m=+0.709818455,LastTimestamp:2026-02-20 11:47:19.768485103 +0000 UTC m=+0.943536953,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.865498 master-0 kubenswrapper[4180]: E0220 11:47:31.865375 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88ccea03\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88ccea03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534766595 +0000 UTC m=+0.709818455,LastTimestamp:2026-02-20 11:47:19.768515904 +0000 UTC m=+0.943567754,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.869575 master-0 kubenswrapper[4180]: E0220 11:47:31.869414 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cd5570\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cd5570 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534794096 +0000 UTC m=+0.709845956,LastTimestamp:2026-02-20 11:47:19.768558115 +0000 UTC m=+0.943609965,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.874141 master-0 kubenswrapper[4180]: E0220 11:47:31.873959 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cd5570\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cd5570 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534794096 +0000 UTC m=+0.709845956,LastTimestamp:2026-02-20 11:47:19.768571166 +0000 UTC m=+0.943623016,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.877873 master-0 kubenswrapper[4180]: E0220 11:47:31.877756 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cda189\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cda189 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534813577 +0000 UTC m=+0.709865437,LastTimestamp:2026-02-20 11:47:19.768581976 +0000 UTC m=+0.943633836,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.881903 master-0 kubenswrapper[4180]: E0220 11:47:31.881800 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cda189\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cda189 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534813577 +0000 UTC m=+0.709865437,LastTimestamp:2026-02-20 11:47:19.768588226 +0000 UTC m=+0.943640076,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.886981 master-0 kubenswrapper[4180]: E0220 11:47:31.886867 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88ccea03\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88ccea03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534766595 +0000 UTC m=+0.709818455,LastTimestamp:2026-02-20 11:47:19.769750004 +0000 UTC m=+0.944801854,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.891228 master-0 kubenswrapper[4180]: E0220 11:47:31.891122 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895f1ef88cd5570\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895f1ef88cd5570 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:19.534794096 +0000 UTC m=+0.709845956,LastTimestamp:2026-02-20 11:47:19.769789745 +0000 UTC m=+0.944841605,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.896361 master-0 kubenswrapper[4180]: E0220 11:47:31.896187 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1efcbca4b38 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:20.658668344 +0000 UTC m=+1.833720194,LastTimestamp:2026-02-20 11:47:20.658668344 +0000 UTC m=+1.833720194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.900021 master-0 kubenswrapper[4180]: E0220 11:47:31.899910 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1efcbcde7a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:20.658905001 +0000 UTC m=+1.833956861,LastTimestamp:2026-02-20 11:47:20.658905001 +0000 UTC m=+1.833956861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.906129 master-0 kubenswrapper[4180]: E0220 11:47:31.905927 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1895f1efcd411afb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:20.683231995 +0000 UTC m=+1.858283855,LastTimestamp:2026-02-20 11:47:20.683231995 +0000 UTC m=+1.858283855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.912008 master-0 kubenswrapper[4180]: E0220 11:47:31.911897 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895f1efce1645f4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:20.697202164 +0000 UTC m=+1.872253994,LastTimestamp:2026-02-20 11:47:20.697202164 +0000 UTC m=+1.872253994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.917617 master-0 kubenswrapper[4180]: E0220 11:47:31.917458 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1efcf68488a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:20.719353994 +0000 UTC m=+1.894405854,LastTimestamp:2026-02-20 11:47:20.719353994 +0000 UTC m=+1.894405854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:31.922172 master-0 kubenswrapper[4180]: E0220 11:47:31.921972 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f02c61f409 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" in 1.62s (1.62s including waiting). Image size: 464984427 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:22.279220233 +0000 UTC m=+3.454272063,LastTimestamp:2026-02-20 11:47:22.279220233 +0000 UTC m=+3.454272063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.925881 master-0 kubenswrapper[4180]: E0220 11:47:31.925745 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f038ee4b65 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:22.489744229 +0000 UTC m=+3.664796049,LastTimestamp:2026-02-20 11:47:22.489744229 +0000 UTC m=+3.664796049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.930338 master-0 kubenswrapper[4180]: E0220 11:47:31.930228 4180 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f039e64d7b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:22.505997691 +0000 UTC m=+3.681049511,LastTimestamp:2026-02-20 11:47:22.505997691 +0000 UTC m=+3.681049511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.935036 master-0 kubenswrapper[4180]: E0220 11:47:31.934923 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f06632e578 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.24921484 +0000 UTC m=+4.424266660,LastTimestamp:2026-02-20 
11:47:23.24921484 +0000 UTC m=+4.424266660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.938979 master-0 kubenswrapper[4180]: E0220 11:47:31.938750 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895f1f068f0ed7c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" in 2.597s (2.598s including waiting). 
Image size: 529218694 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.295223164 +0000 UTC m=+4.470274984,LastTimestamp:2026-02-20 11:47:23.295223164 +0000 UTC m=+4.470274984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.940832 master-0 kubenswrapper[4180]: I0220 11:47:31.940778 4180 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:47:31.943913 master-0 kubenswrapper[4180]: E0220 11:47:31.943735 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f072c0d0fb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.459842299 +0000 UTC m=+4.634894119,LastTimestamp:2026-02-20 11:47:23.459842299 +0000 UTC m=+4.634894119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.949141 master-0 kubenswrapper[4180]: E0220 11:47:31.948972 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-master-0-master-0.1895f1f073806ab6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.47239903 +0000 UTC m=+4.647450850,LastTimestamp:2026-02-20 11:47:23.47239903 +0000 UTC m=+4.647450850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.954003 master-0 kubenswrapper[4180]: E0220 11:47:31.953853 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f07389515f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.472982367 +0000 UTC m=+4.648034187,LastTimestamp:2026-02-20 11:47:23.472982367 +0000 UTC m=+4.648034187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.959193 master-0 kubenswrapper[4180]: E0220 11:47:31.959084 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895f1f07443c463 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.485201507 +0000 UTC m=+4.660253327,LastTimestamp:2026-02-20 11:47:23.485201507 +0000 UTC m=+4.660253327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.963993 master-0 kubenswrapper[4180]: E0220 11:47:31.963902 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895f1f07481f557 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.489277271 +0000 UTC m=+4.664329081,LastTimestamp:2026-02-20 11:47:23.489277271 +0000 UTC m=+4.664329081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.969648 
master-0 kubenswrapper[4180]: E0220 11:47:31.969573 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895f1f080712a53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.689503315 +0000 UTC m=+4.864555135,LastTimestamp:2026-02-20 11:47:23.689503315 +0000 UTC m=+4.864555135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.979472 master-0 kubenswrapper[4180]: I0220 11:47:31.979380 4180 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:47:31.986238 master-0 kubenswrapper[4180]: E0220 11:47:31.985950 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895f1f06632e578\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f06632e578 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.24921484 +0000 UTC m=+4.424266660,LastTimestamp:2026-02-20 11:47:23.69695296 +0000 UTC m=+4.872004800,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:31.992747 master-0 kubenswrapper[4180]: E0220 11:47:31.992610 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895f1f0817c4d47 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.707010375 +0000 UTC m=+4.882062195,LastTimestamp:2026-02-20 11:47:23.707010375 +0000 UTC m=+4.882062195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:32.001687 master-0 kubenswrapper[4180]: E0220 11:47:32.001476 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895f1f072c0d0fb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f072c0d0fb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.459842299 +0000 UTC m=+4.634894119,LastTimestamp:2026-02-20 11:47:23.901794633 +0000 UTC m=+5.076846453,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:32.010083 master-0 kubenswrapper[4180]: E0220 11:47:32.009176 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895f1f07389515f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f07389515f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.472982367 +0000 UTC m=+4.648034187,LastTimestamp:2026-02-20 11:47:23.913309362 +0000 UTC m=+5.088361182,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:32.016612 master-0 kubenswrapper[4180]: E0220 11:47:32.016309 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f0bcb9e0b5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:24.700901557 +0000 UTC m=+5.875953367,LastTimestamp:2026-02-20 11:47:24.700901557 +0000 UTC m=+5.875953367,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:32.025972 master-0 kubenswrapper[4180]: E0220 11:47:32.025722 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895f1f0bcb9e0b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f0bcb9e0b5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:24.700901557 +0000 UTC m=+5.875953367,LastTimestamp:2026-02-20 11:47:25.705581554 +0000 UTC m=+6.880633394,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:32.034012 master-0 kubenswrapper[4180]: E0220 11:47:32.033844 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f1789fc4a7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 7.194s (7.194s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:27.853307047 +0000 UTC m=+9.028358907,LastTimestamp:2026-02-20 11:47:27.853307047 +0000 UTC m=+9.028358907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:32.042069 master-0 kubenswrapper[4180]: E0220 11:47:32.041589 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f17f58813d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 7.246s (7.246s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:27.966077245 +0000 UTC m=+9.141129105,LastTimestamp:2026-02-20 11:47:27.966077245 +0000 UTC m=+9.141129105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:32.050227 master-0 kubenswrapper[4180]: E0220 11:47:32.049952 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1895f1f1837f9f66 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 7.352s (7.352s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.035749734 +0000 UTC m=+9.210801594,LastTimestamp:2026-02-20 11:47:28.035749734 +0000 UTC m=+9.210801594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:32.059424 master-0 kubenswrapper[4180]: E0220 11:47:32.059274 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f18ad0c162 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.158507362 +0000 UTC m=+9.333559192,LastTimestamp:2026-02-20 11:47:28.158507362 +0000 UTC m=+9.333559192,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:32.071326 master-0 kubenswrapper[4180]: E0220 11:47:32.071196 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f18c195b32 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.180042546 +0000 UTC m=+9.355094366,LastTimestamp:2026-02-20 11:47:28.180042546 +0000 UTC m=+9.355094366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.078774 master-0 kubenswrapper[4180]: E0220 11:47:32.078506 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f18c283c40 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.181017664 +0000 UTC m=+9.356069484,LastTimestamp:2026-02-20 11:47:28.181017664 +0000 UTC m=+9.356069484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.089790 master-0 kubenswrapper[4180]: E0220 11:47:32.089004 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f18cf149c0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.194193856 +0000 UTC m=+9.369245676,LastTimestamp:2026-02-20 11:47:28.194193856 +0000 UTC m=+9.369245676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.096097 master-0 kubenswrapper[4180]: E0220 11:47:32.095920 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f18d014a3d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.195242557 +0000 UTC m=+9.370294377,LastTimestamp:2026-02-20 11:47:28.195242557 +0000 UTC m=+9.370294377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.096782 master-0 kubenswrapper[4180]: E0220 11:47:32.096692 4180 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 20 11:47:32.104088 master-0 kubenswrapper[4180]: E0220 11:47:32.103922 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1895f1f1961c38be kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.348002494 +0000 UTC m=+9.523054354,LastTimestamp:2026-02-20 11:47:28.348002494 +0000 UTC m=+9.523054354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.113282 master-0 kubenswrapper[4180]: E0220 11:47:32.113137 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1895f1f1970c5854 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.36373922 +0000 UTC m=+9.538791080,LastTimestamp:2026-02-20 11:47:28.36373922 +0000 UTC m=+9.538791080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.120245 master-0 kubenswrapper[4180]: E0220 11:47:32.120101 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f1ac1bfce0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.71708592 +0000 UTC m=+9.892137780,LastTimestamp:2026-02-20 11:47:28.71708592 +0000 UTC m=+9.892137780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.127463 master-0 kubenswrapper[4180]: E0220 11:47:32.127354 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f1ba6009f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.956426737 +0000 UTC m=+10.131478567,LastTimestamp:2026-02-20 11:47:28.956426737 +0000 UTC m=+10.131478567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.134318 master-0 kubenswrapper[4180]: E0220 11:47:32.134242 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f1bb2bac5b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.969772123 +0000 UTC m=+10.144823953,LastTimestamp:2026-02-20 11:47:28.969772123 +0000 UTC m=+10.144823953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.140171 master-0 kubenswrapper[4180]: E0220 11:47:32.140074 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f1bb3d5f3b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.970932027 +0000 UTC m=+10.145983857,LastTimestamp:2026-02-20 11:47:28.970932027 +0000 UTC m=+10.145983857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.145377 master-0 kubenswrapper[4180]: E0220 11:47:32.145242 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f21d6d9b9b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\" in 2.422s (2.423s including waiting). Image size: 505137106 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:30.618260379 +0000 UTC m=+11.793312219,LastTimestamp:2026-02-20 11:47:30.618260379 +0000 UTC m=+11.793312219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.172215 master-0 kubenswrapper[4180]: E0220 11:47:32.172025 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f22b4c4152 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:30.850955602 +0000 UTC m=+12.026007432,LastTimestamp:2026-02-20 11:47:30.850955602 +0000 UTC m=+12.026007432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.179293 master-0 kubenswrapper[4180]: E0220 11:47:32.178507 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f22bf1ff57 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:30.861817687 +0000 UTC m=+12.036869517,LastTimestamp:2026-02-20 11:47:30.861817687 +0000 UTC m=+12.036869517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.186427 master-0 kubenswrapper[4180]: E0220 11:47:32.186375 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f270882d8d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:32.012510605 +0000 UTC m=+13.187562465,LastTimestamp:2026-02-20 11:47:32.012510605 +0000 UTC m=+13.187562465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.191373 master-0 kubenswrapper[4180]: E0220 11:47:32.191259 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f2750b00a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" in 3.117s (3.117s including waiting). Image size: 514875199 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:32.088193188 +0000 UTC m=+13.263245008,LastTimestamp:2026-02-20 11:47:32.088193188 +0000 UTC m=+13.263245008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.261982 master-0 kubenswrapper[4180]: E0220 11:47:32.261868 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1895f1f18c283c40\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f18c283c40 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.181017664 +0000 UTC m=+9.356069484,LastTimestamp:2026-02-20 11:47:32.256741603 +0000 UTC m=+13.431793423,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.275262 master-0 kubenswrapper[4180]: E0220 11:47:32.275168 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1895f1f18cf149c0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895f1f18cf149c0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:28.194193856 +0000 UTC m=+9.369245676,LastTimestamp:2026-02-20 11:47:32.270013948 +0000 UTC m=+13.445065768,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.297789 master-0 kubenswrapper[4180]: E0220 11:47:32.297665 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f281487e48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:32.29354964 +0000 UTC m=+13.468601470,LastTimestamp:2026-02-20 11:47:32.29354964 +0000 UTC m=+13.468601470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.310202 master-0 kubenswrapper[4180]: E0220 11:47:32.310080 4180 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895f1f2820501ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:32.305904058 +0000 UTC m=+13.480955878,LastTimestamp:2026-02-20 11:47:32.305904058 +0000 UTC m=+13.480955878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:32.335580 master-0 kubenswrapper[4180]: I0220 11:47:32.335544 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:32.336549 master-0 kubenswrapper[4180]: I0220 11:47:32.336479 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:32.336549 master-0 kubenswrapper[4180]: I0220 11:47:32.336511 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:32.336549 master-0 kubenswrapper[4180]: I0220 11:47:32.336520 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:32.336819 master-0 kubenswrapper[4180]: I0220 11:47:32.336584 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 11:47:32.341916 master-0 kubenswrapper[4180]: E0220 11:47:32.341869 4180 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 20 11:47:32.476753 master-0 kubenswrapper[4180]: I0220 11:47:32.476700 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:32.738335 master-0 kubenswrapper[4180]: I0220 11:47:32.738223 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"6d3121ed9f14f1a68a11c14e19a8ba5e47d812ae84b3f62cc56772a81aa8f139"}
Feb 20 11:47:32.739005 master-0 kubenswrapper[4180]: I0220 11:47:32.738985 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:32.739167 master-0 kubenswrapper[4180]: I0220 11:47:32.738375 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:32.741267 master-0 kubenswrapper[4180]: I0220 11:47:32.741245 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:32.741386 master-0 kubenswrapper[4180]: I0220 11:47:32.741370 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:32.741481 master-0 kubenswrapper[4180]: I0220 11:47:32.741468 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:32.742797 master-0 kubenswrapper[4180]: I0220 11:47:32.742774 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315"}
Feb 20 11:47:32.743221 master-0 kubenswrapper[4180]: I0220 11:47:32.743176 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:32.744058 master-0 kubenswrapper[4180]: I0220 11:47:32.744036 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:32.744184 master-0 kubenswrapper[4180]: I0220 11:47:32.744167 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:32.744269 master-0 kubenswrapper[4180]: I0220 11:47:32.744256 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:33.479128 master-0 kubenswrapper[4180]: I0220 11:47:33.479075 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:33.745197 master-0 kubenswrapper[4180]: I0220 11:47:33.745062 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:33.745197 master-0 kubenswrapper[4180]: I0220 11:47:33.745121 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:33.746346 master-0 kubenswrapper[4180]: I0220 11:47:33.746290 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:33.746346 master-0 kubenswrapper[4180]: I0220 11:47:33.746335 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:33.746346 master-0 kubenswrapper[4180]: I0220 11:47:33.746351 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:33.747588 master-0 kubenswrapper[4180]: I0220 11:47:33.747515 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:33.747588 master-0 kubenswrapper[4180]: I0220 11:47:33.747579 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:33.747761 master-0 kubenswrapper[4180]: I0220 11:47:33.747595 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:33.917897 master-0 kubenswrapper[4180]: I0220 11:47:33.917738 4180 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 20 11:47:33.941006 master-0 kubenswrapper[4180]: I0220 11:47:33.940931 4180 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 20 11:47:34.479740 master-0 kubenswrapper[4180]: I0220 11:47:34.479657 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:34.785100 master-0 kubenswrapper[4180]: W0220 11:47:34.785017 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:34.785908 master-0 kubenswrapper[4180]: E0220 11:47:34.785101 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 20 11:47:35.245857 master-0 kubenswrapper[4180]: I0220 11:47:35.245617 4180 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:35.246117 master-0 kubenswrapper[4180]: I0220 11:47:35.245901 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:35.247462 master-0 kubenswrapper[4180]: I0220 11:47:35.247392 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:35.247663 master-0 kubenswrapper[4180]: I0220 11:47:35.247473 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:35.247663 master-0 kubenswrapper[4180]: I0220 11:47:35.247499 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:35.254905 master-0 kubenswrapper[4180]: I0220 11:47:35.254833 4180 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:35.479212 master-0 kubenswrapper[4180]: I0220 11:47:35.479156 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:35.700352 master-0 kubenswrapper[4180]: W0220 11:47:35.700281 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 20 11:47:35.700352 master-0 kubenswrapper[4180]: E0220 11:47:35.700346 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 20 11:47:35.751873 master-0 kubenswrapper[4180]: I0220 11:47:35.751813 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:35.752271 master-0 kubenswrapper[4180]: I0220 11:47:35.752233 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:35.753076 master-0 kubenswrapper[4180]: I0220 11:47:35.753016 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:35.753264 master-0 kubenswrapper[4180]: I0220 11:47:35.753242 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:35.753401 master-0 kubenswrapper[4180]: I0220 11:47:35.753381 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:35.759108 master-0 kubenswrapper[4180]: I0220 11:47:35.759081 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:47:36.477631 master-0 kubenswrapper[4180]: I0220 11:47:36.477561 4180 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:36.478715 master-0 kubenswrapper[4180]: I0220 11:47:36.477799 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:36.479672 master-0 kubenswrapper[4180]: I0220 11:47:36.479368 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:36.479672 master-0 kubenswrapper[4180]: I0220 11:47:36.479633 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:36.479925 master-0 kubenswrapper[4180]: I0220 11:47:36.479687 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:36.479925 master-0 kubenswrapper[4180]: I0220 11:47:36.479710 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:36.485330 master-0 kubenswrapper[4180]: I0220 11:47:36.485250 4180 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:36.754459 master-0 kubenswrapper[4180]: I0220 11:47:36.754422 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:36.754869 master-0 kubenswrapper[4180]: I0220 11:47:36.754463 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:47:36.754869 master-0 kubenswrapper[4180]: I0220 11:47:36.754421 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:36.755784 master-0 kubenswrapper[4180]: I0220 11:47:36.755726 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:36.755784 master-0 kubenswrapper[4180]: I0220 11:47:36.755786 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:36.756013 master-0 kubenswrapper[4180]: I0220 11:47:36.755803 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:36.756143 master-0 kubenswrapper[4180]: I0220 11:47:36.756087 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:36.756143 master-0 kubenswrapper[4180]: I0220 11:47:36.756141 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:36.756339 master-0 kubenswrapper[4180]: I0220 11:47:36.756166 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:37.477590 master-0 kubenswrapper[4180]: I0220 11:47:37.477488 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 11:47:37.518492 master-0 kubenswrapper[4180]: W0220 11:47:37.518403 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 20 11:47:37.519127 master-0 kubenswrapper[4180]: E0220 11:47:37.518509 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 20 11:47:37.664328 master-0 kubenswrapper[4180]: I0220 11:47:37.664179 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:37.665705 master-0 kubenswrapper[4180]: I0220 11:47:37.665665 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:37.665848 master-0 kubenswrapper[4180]: I0220 11:47:37.665713 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:37.665848 master-0 kubenswrapper[4180]: I0220 11:47:37.665731 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:37.666520 master-0 kubenswrapper[4180]: I0220 11:47:37.666219 4180 scope.go:117] "RemoveContainer" containerID="a2c683f9a896ca400275a05e822759979315bcb526b2a548a70aa7107622475e"
Feb 20 11:47:37.677267 master-0 kubenswrapper[4180]: E0220 11:47:37.677096 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895f1f06632e578\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f06632e578 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.24921484 +0000 UTC m=+4.424266660,LastTimestamp:2026-02-20 11:47:37.670050796 +0000 UTC m=+18.845102646,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 11:47:37.756850 master-0 kubenswrapper[4180]: I0220 11:47:37.756751 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:37.757117 master-0 kubenswrapper[4180]: I0220 11:47:37.757008 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 11:47:37.758591 master-0 kubenswrapper[4180]: I0220 11:47:37.758551 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:37.758591 master-0 kubenswrapper[4180]: I0220 11:47:37.758588 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:47:37.758774 master-0 kubenswrapper[4180]: I0220 11:47:37.758602 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:47:37.758774 master-0 kubenswrapper[4180]: I0220 11:47:37.758702 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:47:37.758774 master-0 kubenswrapper[4180]: I0220 11:47:37.758723 4180 kubelet_node_status.go:724] "Recording 
event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:37.758774 master-0 kubenswrapper[4180]: I0220 11:47:37.758735 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:37.967376 master-0 kubenswrapper[4180]: E0220 11:47:37.966927 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895f1f072c0d0fb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f072c0d0fb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.459842299 +0000 UTC m=+4.634894119,LastTimestamp:2026-02-20 11:47:37.960400381 +0000 UTC m=+19.135452241,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:37.985942 master-0 kubenswrapper[4180]: E0220 11:47:37.985748 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895f1f07389515f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f07389515f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:23.472982367 +0000 UTC m=+4.648034187,LastTimestamp:2026-02-20 11:47:37.979060192 +0000 UTC m=+19.154112052,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:38.175925 master-0 kubenswrapper[4180]: W0220 11:47:38.175862 4180 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 20 11:47:38.176160 master-0 kubenswrapper[4180]: E0220 11:47:38.175930 4180 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 20 11:47:38.479749 master-0 kubenswrapper[4180]: I0220 11:47:38.479608 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:38.760387 master-0 kubenswrapper[4180]: I0220 11:47:38.760355 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 20 11:47:38.761932 master-0 
kubenswrapper[4180]: I0220 11:47:38.761659 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 20 11:47:38.762159 master-0 kubenswrapper[4180]: I0220 11:47:38.762098 4180 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="53e7dc45156105f926a77b4b48981d5e387a572098dd2e0e299ab01a43056605" exitCode=1 Feb 20 11:47:38.762247 master-0 kubenswrapper[4180]: I0220 11:47:38.762151 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"53e7dc45156105f926a77b4b48981d5e387a572098dd2e0e299ab01a43056605"} Feb 20 11:47:38.762247 master-0 kubenswrapper[4180]: I0220 11:47:38.762203 4180 scope.go:117] "RemoveContainer" containerID="a2c683f9a896ca400275a05e822759979315bcb526b2a548a70aa7107622475e" Feb 20 11:47:38.762736 master-0 kubenswrapper[4180]: I0220 11:47:38.762372 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:38.763599 master-0 kubenswrapper[4180]: I0220 11:47:38.763556 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:38.763723 master-0 kubenswrapper[4180]: I0220 11:47:38.763615 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:38.763723 master-0 kubenswrapper[4180]: I0220 11:47:38.763632 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:38.764151 master-0 kubenswrapper[4180]: I0220 11:47:38.764107 4180 scope.go:117] "RemoveContainer" containerID="53e7dc45156105f926a77b4b48981d5e387a572098dd2e0e299ab01a43056605" Feb 20 11:47:38.764924 master-0 
kubenswrapper[4180]: E0220 11:47:38.764352 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 20 11:47:38.773285 master-0 kubenswrapper[4180]: E0220 11:47:38.773116 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895f1f0bcb9e0b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f0bcb9e0b5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:24.700901557 +0000 UTC m=+5.875953367,LastTimestamp:2026-02-20 11:47:38.764306708 +0000 UTC m=+19.939358558,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:38.954357 master-0 kubenswrapper[4180]: I0220 11:47:38.954253 4180 csr.go:261] certificate signing request csr-ctjr9 is approved, waiting to be issued Feb 20 11:47:39.104425 master-0 kubenswrapper[4180]: E0220 11:47:39.104288 4180 
controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 11:47:39.343135 master-0 kubenswrapper[4180]: I0220 11:47:39.343025 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:39.344406 master-0 kubenswrapper[4180]: I0220 11:47:39.344324 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:39.344406 master-0 kubenswrapper[4180]: I0220 11:47:39.344396 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:39.344674 master-0 kubenswrapper[4180]: I0220 11:47:39.344425 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:39.344674 master-0 kubenswrapper[4180]: I0220 11:47:39.344511 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 11:47:39.351520 master-0 kubenswrapper[4180]: E0220 11:47:39.351464 4180 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 20 11:47:39.482586 master-0 kubenswrapper[4180]: I0220 11:47:39.482405 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:39.620963 master-0 kubenswrapper[4180]: E0220 11:47:39.620862 4180 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 20 
11:47:39.767438 master-0 kubenswrapper[4180]: I0220 11:47:39.767303 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 20 11:47:40.480233 master-0 kubenswrapper[4180]: I0220 11:47:40.480161 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:41.478695 master-0 kubenswrapper[4180]: I0220 11:47:41.478634 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:42.480849 master-0 kubenswrapper[4180]: I0220 11:47:42.480280 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:43.481176 master-0 kubenswrapper[4180]: I0220 11:47:43.481071 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:44.479255 master-0 kubenswrapper[4180]: I0220 11:47:44.479188 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:45.478899 master-0 kubenswrapper[4180]: I0220 11:47:45.478840 4180 csi_plugin.go:884] Failed to contact API server when waiting 
for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:46.112247 master-0 kubenswrapper[4180]: E0220 11:47:46.112192 4180 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 11:47:46.352662 master-0 kubenswrapper[4180]: I0220 11:47:46.352574 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:46.353986 master-0 kubenswrapper[4180]: I0220 11:47:46.353921 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:46.353986 master-0 kubenswrapper[4180]: I0220 11:47:46.353978 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:46.354212 master-0 kubenswrapper[4180]: I0220 11:47:46.354002 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:46.354212 master-0 kubenswrapper[4180]: I0220 11:47:46.354079 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 11:47:46.363200 master-0 kubenswrapper[4180]: E0220 11:47:46.363061 4180 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 20 11:47:46.479792 master-0 kubenswrapper[4180]: I0220 11:47:46.479690 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 20 11:47:47.167171 master-0 kubenswrapper[4180]: I0220 11:47:47.167055 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:47:47.167424 master-0 kubenswrapper[4180]: I0220 11:47:47.167281 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:47.168672 master-0 kubenswrapper[4180]: I0220 11:47:47.168621 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:47.168745 master-0 kubenswrapper[4180]: I0220 11:47:47.168676 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:47.168745 master-0 kubenswrapper[4180]: I0220 11:47:47.168693 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:47.174331 master-0 kubenswrapper[4180]: I0220 11:47:47.174276 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:47:47.477742 master-0 kubenswrapper[4180]: I0220 11:47:47.477600 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:47.790438 master-0 kubenswrapper[4180]: I0220 11:47:47.790365 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:47.791481 master-0 kubenswrapper[4180]: I0220 11:47:47.791425 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:47.791583 master-0 kubenswrapper[4180]: I0220 11:47:47.791484 4180 kubelet_node_status.go:724] 
"Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:47.791583 master-0 kubenswrapper[4180]: I0220 11:47:47.791504 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:48.479838 master-0 kubenswrapper[4180]: I0220 11:47:48.479743 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:49.478767 master-0 kubenswrapper[4180]: I0220 11:47:49.478667 4180 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 11:47:49.621970 master-0 kubenswrapper[4180]: E0220 11:47:49.621868 4180 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 20 11:47:49.663876 master-0 kubenswrapper[4180]: I0220 11:47:49.663755 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:49.664933 master-0 kubenswrapper[4180]: I0220 11:47:49.664878 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:49.665021 master-0 kubenswrapper[4180]: I0220 11:47:49.664934 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:49.665021 master-0 kubenswrapper[4180]: I0220 11:47:49.664958 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:49.665471 master-0 kubenswrapper[4180]: I0220 11:47:49.665424 4180 scope.go:117] "RemoveContainer" 
containerID="53e7dc45156105f926a77b4b48981d5e387a572098dd2e0e299ab01a43056605" Feb 20 11:47:49.665742 master-0 kubenswrapper[4180]: E0220 11:47:49.665690 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 20 11:47:49.673281 master-0 kubenswrapper[4180]: E0220 11:47:49.673096 4180 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895f1f0bcb9e0b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895f1f0bcb9e0b5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:47:24.700901557 +0000 UTC m=+5.875953367,LastTimestamp:2026-02-20 11:47:49.665644989 +0000 UTC m=+30.840696849,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:47:50.109867 master-0 kubenswrapper[4180]: I0220 11:47:50.109800 4180 csr.go:257] certificate signing request csr-ctjr9 is issued Feb 20 
11:47:50.330424 master-0 kubenswrapper[4180]: I0220 11:47:50.330350 4180 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 20 11:47:50.483813 master-0 kubenswrapper[4180]: I0220 11:47:50.483670 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:50.500447 master-0 kubenswrapper[4180]: I0220 11:47:50.500391 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:50.565359 master-0 kubenswrapper[4180]: I0220 11:47:50.565296 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:50.837209 master-0 kubenswrapper[4180]: I0220 11:47:50.837138 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:50.837209 master-0 kubenswrapper[4180]: E0220 11:47:50.837207 4180 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 20 11:47:50.858016 master-0 kubenswrapper[4180]: I0220 11:47:50.857943 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:50.873797 master-0 kubenswrapper[4180]: I0220 11:47:50.873733 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:50.942494 master-0 kubenswrapper[4180]: I0220 11:47:50.942439 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:51.111684 master-0 kubenswrapper[4180]: I0220 11:47:51.111544 4180 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-21 11:39:43 +0000 UTC, rotation deadline is 2026-02-21 08:53:11.372824085 +0000 UTC Feb 20 11:47:51.111684 master-0 kubenswrapper[4180]: I0220 11:47:51.111585 4180 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 21h5m20.261242177s for next 
certificate rotation Feb 20 11:47:51.199814 master-0 kubenswrapper[4180]: I0220 11:47:51.199734 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:51.199814 master-0 kubenswrapper[4180]: E0220 11:47:51.199781 4180 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 20 11:47:51.300390 master-0 kubenswrapper[4180]: I0220 11:47:51.300337 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:51.316803 master-0 kubenswrapper[4180]: I0220 11:47:51.316723 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:51.377639 master-0 kubenswrapper[4180]: I0220 11:47:51.377508 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:51.656804 master-0 kubenswrapper[4180]: I0220 11:47:51.656657 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:51.656804 master-0 kubenswrapper[4180]: E0220 11:47:51.656705 4180 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 20 11:47:52.236580 master-0 kubenswrapper[4180]: I0220 11:47:52.236490 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:52.253339 master-0 kubenswrapper[4180]: I0220 11:47:52.253243 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:52.312168 master-0 kubenswrapper[4180]: I0220 11:47:52.312104 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:52.591411 master-0 kubenswrapper[4180]: I0220 11:47:52.591325 4180 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 20 11:47:52.591411 master-0 kubenswrapper[4180]: E0220 11:47:52.591377 
4180 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 20 11:47:53.117636 master-0 kubenswrapper[4180]: E0220 11:47:53.117577 4180 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Feb 20 11:47:53.364274 master-0 kubenswrapper[4180]: I0220 11:47:53.364197 4180 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:47:53.365867 master-0 kubenswrapper[4180]: I0220 11:47:53.365820 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 11:47:53.365981 master-0 kubenswrapper[4180]: I0220 11:47:53.365889 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 11:47:53.365981 master-0 kubenswrapper[4180]: I0220 11:47:53.365909 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 11:47:53.365981 master-0 kubenswrapper[4180]: I0220 11:47:53.365969 4180 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 11:47:53.380393 master-0 kubenswrapper[4180]: I0220 11:47:53.380274 4180 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 20 11:47:53.380393 master-0 kubenswrapper[4180]: E0220 11:47:53.380333 4180 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Feb 20 11:47:53.393064 master-0 kubenswrapper[4180]: E0220 11:47:53.392999 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:53.493450 master-0 kubenswrapper[4180]: E0220 11:47:53.493373 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"master-0\" not found" Feb 20 11:47:53.494602 master-0 kubenswrapper[4180]: I0220 11:47:53.494558 4180 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 20 11:47:53.505004 master-0 kubenswrapper[4180]: I0220 11:47:53.504965 4180 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 20 11:47:53.594253 master-0 kubenswrapper[4180]: E0220 11:47:53.594156 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:53.694776 master-0 kubenswrapper[4180]: E0220 11:47:53.694640 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:53.795045 master-0 kubenswrapper[4180]: E0220 11:47:53.794968 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:53.896112 master-0 kubenswrapper[4180]: E0220 11:47:53.896034 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:53.996581 master-0 kubenswrapper[4180]: E0220 11:47:53.996400 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:54.097510 master-0 kubenswrapper[4180]: E0220 11:47:54.097418 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:54.198387 master-0 kubenswrapper[4180]: E0220 11:47:54.198273 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:54.299265 master-0 kubenswrapper[4180]: E0220 11:47:54.299177 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:54.399811 master-0 kubenswrapper[4180]: E0220 11:47:54.399720 4180 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:54.500398 master-0 kubenswrapper[4180]: E0220 11:47:54.500316 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:54.601239 master-0 kubenswrapper[4180]: E0220 11:47:54.601088 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:54.701755 master-0 kubenswrapper[4180]: E0220 11:47:54.701649 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:54.802861 master-0 kubenswrapper[4180]: E0220 11:47:54.802746 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:54.903220 master-0 kubenswrapper[4180]: E0220 11:47:54.903047 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.004184 master-0 kubenswrapper[4180]: E0220 11:47:55.004122 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.014339 master-0 kubenswrapper[4180]: I0220 11:47:55.014263 4180 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 20 11:47:55.104343 master-0 kubenswrapper[4180]: E0220 11:47:55.104237 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.205119 master-0 kubenswrapper[4180]: E0220 11:47:55.204964 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.306009 master-0 kubenswrapper[4180]: E0220 11:47:55.305962 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.406716 master-0 kubenswrapper[4180]: E0220 
11:47:55.406630 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.507496 master-0 kubenswrapper[4180]: E0220 11:47:55.507444 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.608694 master-0 kubenswrapper[4180]: E0220 11:47:55.608613 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.709008 master-0 kubenswrapper[4180]: E0220 11:47:55.708946 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.809339 master-0 kubenswrapper[4180]: E0220 11:47:55.809171 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:55.909954 master-0 kubenswrapper[4180]: E0220 11:47:55.909882 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:56.010683 master-0 kubenswrapper[4180]: E0220 11:47:56.010591 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:56.111217 master-0 kubenswrapper[4180]: E0220 11:47:56.111030 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:56.212062 master-0 kubenswrapper[4180]: E0220 11:47:56.211984 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:56.313197 master-0 kubenswrapper[4180]: E0220 11:47:56.313099 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:56.414287 master-0 kubenswrapper[4180]: E0220 11:47:56.414154 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
Feb 20 11:47:56.514440 master-0 kubenswrapper[4180]: E0220 11:47:56.514343 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:56.615577 master-0 kubenswrapper[4180]: E0220 11:47:56.615475 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:56.716194 master-0 kubenswrapper[4180]: E0220 11:47:56.716048 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:56.817009 master-0 kubenswrapper[4180]: E0220 11:47:56.816939 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:56.918057 master-0 kubenswrapper[4180]: E0220 11:47:56.917997 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.018351 master-0 kubenswrapper[4180]: E0220 11:47:57.018313 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.119342 master-0 kubenswrapper[4180]: E0220 11:47:57.119298 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.219863 master-0 kubenswrapper[4180]: E0220 11:47:57.219792 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.320658 master-0 kubenswrapper[4180]: E0220 11:47:57.320477 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.420654 master-0 kubenswrapper[4180]: E0220 11:47:57.420597 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.521734 master-0 kubenswrapper[4180]: E0220 11:47:57.521665 4180 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.622834 master-0 kubenswrapper[4180]: E0220 11:47:57.622697 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.723013 master-0 kubenswrapper[4180]: E0220 11:47:57.722933 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.796029 master-0 kubenswrapper[4180]: I0220 11:47:57.795956 4180 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 20 11:47:57.823635 master-0 kubenswrapper[4180]: E0220 11:47:57.823568 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:57.923878 master-0 kubenswrapper[4180]: E0220 11:47:57.923719 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.023975 master-0 kubenswrapper[4180]: E0220 11:47:58.023840 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.124145 master-0 kubenswrapper[4180]: E0220 11:47:58.124051 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.225046 master-0 kubenswrapper[4180]: E0220 11:47:58.224875 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.325077 master-0 kubenswrapper[4180]: E0220 11:47:58.325011 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.425841 master-0 kubenswrapper[4180]: E0220 11:47:58.425747 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.526239 master-0 kubenswrapper[4180]: E0220 
11:47:58.526115 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.627257 master-0 kubenswrapper[4180]: E0220 11:47:58.627032 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.727875 master-0 kubenswrapper[4180]: E0220 11:47:58.727792 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.829072 master-0 kubenswrapper[4180]: E0220 11:47:58.828928 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:58.929969 master-0 kubenswrapper[4180]: E0220 11:47:58.929904 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.030970 master-0 kubenswrapper[4180]: E0220 11:47:59.030901 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.132081 master-0 kubenswrapper[4180]: E0220 11:47:59.131930 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.233230 master-0 kubenswrapper[4180]: E0220 11:47:59.233140 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.333554 master-0 kubenswrapper[4180]: E0220 11:47:59.333474 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.433822 master-0 kubenswrapper[4180]: E0220 11:47:59.433635 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.534641 master-0 kubenswrapper[4180]: E0220 11:47:59.534547 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
Feb 20 11:47:59.622801 master-0 kubenswrapper[4180]: E0220 11:47:59.622707 4180 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 20 11:47:59.635609 master-0 kubenswrapper[4180]: E0220 11:47:59.635567 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.736808 master-0 kubenswrapper[4180]: E0220 11:47:59.736622 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.837509 master-0 kubenswrapper[4180]: E0220 11:47:59.837430 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.875218 master-0 kubenswrapper[4180]: I0220 11:47:59.875042 4180 csr.go:261] certificate signing request csr-7vwk2 is approved, waiting to be issued Feb 20 11:47:59.884963 master-0 kubenswrapper[4180]: I0220 11:47:59.884922 4180 csr.go:257] certificate signing request csr-7vwk2 is issued Feb 20 11:47:59.938165 master-0 kubenswrapper[4180]: E0220 11:47:59.938097 4180 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 11:47:59.997958 master-0 kubenswrapper[4180]: I0220 11:47:59.997799 4180 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 20 11:48:00.271448 master-0 kubenswrapper[4180]: I0220 11:48:00.271377 4180 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 20 11:48:00.476506 master-0 kubenswrapper[4180]: I0220 11:48:00.476463 4180 apiserver.go:52] "Watching apiserver" Feb 20 11:48:00.482330 master-0 kubenswrapper[4180]: I0220 11:48:00.482293 4180 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 20 11:48:00.482862 master-0 kubenswrapper[4180]: I0220 11:48:00.482759 4180 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-s6zmp","openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw","openshift-network-operator/network-operator-7d7db75979-fv598"] Feb 20 11:48:00.483405 master-0 kubenswrapper[4180]: I0220 11:48:00.483360 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.483405 master-0 kubenswrapper[4180]: I0220 11:48:00.483388 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.484262 master-0 kubenswrapper[4180]: I0220 11:48:00.483363 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.484720 master-0 kubenswrapper[4180]: I0220 11:48:00.484689 4180 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 20 11:48:00.485976 master-0 kubenswrapper[4180]: I0220 11:48:00.485935 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 20 11:48:00.486778 master-0 kubenswrapper[4180]: I0220 11:48:00.486738 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 20 11:48:00.488884 master-0 kubenswrapper[4180]: I0220 11:48:00.488325 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 20 11:48:00.488884 master-0 kubenswrapper[4180]: I0220 11:48:00.488399 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 20 11:48:00.488884 master-0 kubenswrapper[4180]: I0220 11:48:00.488470 4180 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"assisted-installer"/"assisted-installer-controller-config" Feb 20 11:48:00.488884 master-0 kubenswrapper[4180]: I0220 11:48:00.488521 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Feb 20 11:48:00.488884 master-0 kubenswrapper[4180]: I0220 11:48:00.488625 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 20 11:48:00.488884 master-0 kubenswrapper[4180]: I0220 11:48:00.488702 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Feb 20 11:48:00.488884 master-0 kubenswrapper[4180]: I0220 11:48:00.488811 4180 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Feb 20 11:48:00.490275 master-0 kubenswrapper[4180]: I0220 11:48:00.489268 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 20 11:48:00.573493 master-0 kubenswrapper[4180]: I0220 11:48:00.573394 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-resolv-conf\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.573493 master-0 kubenswrapper[4180]: I0220 11:48:00.573472 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-sno-bootstrap-files\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.573789 master-0 kubenswrapper[4180]: I0220 
11:48:00.573514 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjphz\" (UniqueName: \"kubernetes.io/projected/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-kube-api-access-kjphz\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.573789 master-0 kubenswrapper[4180]: I0220 11:48:00.573579 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/312ca024-c8f0-4994-8f9a-b707607341fe-host-etc-kube\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.573789 master-0 kubenswrapper[4180]: I0220 11:48:00.573616 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpnmz\" (UniqueName: \"kubernetes.io/projected/312ca024-c8f0-4994-8f9a-b707607341fe-kube-api-access-bpnmz\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.573789 master-0 kubenswrapper[4180]: I0220 11:48:00.573650 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.573789 master-0 kubenswrapper[4180]: I0220 11:48:00.573687 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/67f890c8-05a1-4797-8da8-6194aea0df9a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.573789 master-0 kubenswrapper[4180]: I0220 11:48:00.573748 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.574390 master-0 kubenswrapper[4180]: I0220 11:48:00.573843 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/67f890c8-05a1-4797-8da8-6194aea0df9a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.574390 master-0 kubenswrapper[4180]: I0220 11:48:00.573903 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-ca-bundle\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.574390 master-0 kubenswrapper[4180]: I0220 11:48:00.573942 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: 
\"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.574390 master-0 kubenswrapper[4180]: I0220 11:48:00.574004 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/312ca024-c8f0-4994-8f9a-b707607341fe-metrics-tls\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.574390 master-0 kubenswrapper[4180]: I0220 11:48:00.574048 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-var-run-resolv-conf\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.675211 master-0 kubenswrapper[4180]: I0220 11:48:00.675114 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.675211 master-0 kubenswrapper[4180]: I0220 11:48:00.675190 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/312ca024-c8f0-4994-8f9a-b707607341fe-metrics-tls\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.675659 master-0 kubenswrapper[4180]: I0220 11:48:00.675233 4180 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-var-run-resolv-conf\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.675659 master-0 kubenswrapper[4180]: I0220 11:48:00.675277 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-resolv-conf\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.675659 master-0 kubenswrapper[4180]: I0220 11:48:00.675325 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-sno-bootstrap-files\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.675659 master-0 kubenswrapper[4180]: I0220 11:48:00.675456 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-var-run-resolv-conf\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.675659 master-0 kubenswrapper[4180]: I0220 11:48:00.675571 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjphz\" (UniqueName: \"kubernetes.io/projected/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-kube-api-access-kjphz\") pod \"assisted-installer-controller-s6zmp\" (UID: 
\"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.675659 master-0 kubenswrapper[4180]: I0220 11:48:00.675597 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-sno-bootstrap-files\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.675997 master-0 kubenswrapper[4180]: I0220 11:48:00.675703 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-resolv-conf\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.675997 master-0 kubenswrapper[4180]: I0220 11:48:00.675760 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.675997 master-0 kubenswrapper[4180]: I0220 11:48:00.675801 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f890c8-05a1-4797-8da8-6194aea0df9a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.675997 master-0 kubenswrapper[4180]: I0220 11:48:00.675881 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" 
(UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.676237 master-0 kubenswrapper[4180]: I0220 11:48:00.676102 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/312ca024-c8f0-4994-8f9a-b707607341fe-host-etc-kube\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.676237 master-0 kubenswrapper[4180]: E0220 11:48:00.676177 4180 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 11:48:00.676237 master-0 kubenswrapper[4180]: I0220 11:48:00.676193 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/312ca024-c8f0-4994-8f9a-b707607341fe-host-etc-kube\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.676237 master-0 kubenswrapper[4180]: I0220 11:48:00.676176 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpnmz\" (UniqueName: \"kubernetes.io/projected/312ca024-c8f0-4994-8f9a-b707607341fe-kube-api-access-bpnmz\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.676450 master-0 kubenswrapper[4180]: E0220 11:48:00.676274 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert 
podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:48:01.176241804 +0000 UTC m=+42.351293654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found Feb 20 11:48:00.676450 master-0 kubenswrapper[4180]: I0220 11:48:00.676312 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-ca-bundle\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.676450 master-0 kubenswrapper[4180]: I0220 11:48:00.676347 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-ca-bundle\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.676662 master-0 kubenswrapper[4180]: I0220 11:48:00.676583 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.676662 master-0 kubenswrapper[4180]: I0220 11:48:00.676590 4180 scope.go:117] "RemoveContainer" containerID="53e7dc45156105f926a77b4b48981d5e387a572098dd2e0e299ab01a43056605" Feb 20 11:48:00.676790 master-0 
kubenswrapper[4180]: I0220 11:48:00.676755 4180 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 20 11:48:00.677182 master-0 kubenswrapper[4180]: I0220 11:48:00.677135 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 20 11:48:00.677315 master-0 kubenswrapper[4180]: I0220 11:48:00.676354 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.677382 master-0 kubenswrapper[4180]: I0220 11:48:00.677314 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/67f890c8-05a1-4797-8da8-6194aea0df9a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.678788 master-0 kubenswrapper[4180]: I0220 11:48:00.678666 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/67f890c8-05a1-4797-8da8-6194aea0df9a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.685247 master-0 kubenswrapper[4180]: I0220 11:48:00.685212 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/312ca024-c8f0-4994-8f9a-b707607341fe-metrics-tls\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.706452 master-0 kubenswrapper[4180]: I0220 11:48:00.706402 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpnmz\" (UniqueName: \"kubernetes.io/projected/312ca024-c8f0-4994-8f9a-b707607341fe-kube-api-access-bpnmz\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:48:00.708100 master-0 kubenswrapper[4180]: I0220 11:48:00.708041 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjphz\" (UniqueName: \"kubernetes.io/projected/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-kube-api-access-kjphz\") pod \"assisted-installer-controller-s6zmp\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") " pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 11:48:00.708667 master-0 kubenswrapper[4180]: I0220 11:48:00.708615 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f890c8-05a1-4797-8da8-6194aea0df9a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:00.826613 master-0 kubenswrapper[4180]: I0220 11:48:00.826397 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-fv598"
Feb 20 11:48:00.850852 master-0 kubenswrapper[4180]: W0220 11:48:00.850785 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod312ca024_c8f0_4994_8f9a_b707607341fe.slice/crio-638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10 WatchSource:0}: Error finding container 638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10: Status 404 returned error can't find the container with id 638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10
Feb 20 11:48:00.857639 master-0 kubenswrapper[4180]: I0220 11:48:00.857593 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s6zmp"
Feb 20 11:48:00.873674 master-0 kubenswrapper[4180]: W0220 11:48:00.873628 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab7ffa68_5f62_4dc8_a24a_9988f3bb1edd.slice/crio-17066e06e0b2e2b17534c5886b653c787b7eefd7c251f9787e27a3b174b19ab1 WatchSource:0}: Error finding container 17066e06e0b2e2b17534c5886b653c787b7eefd7c251f9787e27a3b174b19ab1: Status 404 returned error can't find the container with id 17066e06e0b2e2b17534c5886b653c787b7eefd7c251f9787e27a3b174b19ab1
Feb 20 11:48:00.886761 master-0 kubenswrapper[4180]: I0220 11:48:00.886695 4180 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-21 11:39:43 +0000 UTC, rotation deadline is 2026-02-21 09:12:10.094181773 +0000 UTC
Feb 20 11:48:00.886761 master-0 kubenswrapper[4180]: I0220 11:48:00.886746 4180 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h24m9.207441229s for next certificate rotation
Feb 20 11:48:01.182069 master-0 kubenswrapper[4180]: I0220 11:48:01.181242 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:48:01.182069 master-0 kubenswrapper[4180]: E0220 11:48:01.181561 4180 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:01.182069 master-0 kubenswrapper[4180]: E0220 11:48:01.182050 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:48:02.18200841 +0000 UTC m=+43.357060270 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:01.887455 master-0 kubenswrapper[4180]: I0220 11:48:01.887365 4180 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-21 11:39:43 +0000 UTC, rotation deadline is 2026-02-21 08:38:07.051183542 +0000 UTC
Feb 20 11:48:01.887455 master-0 kubenswrapper[4180]: I0220 11:48:01.887427 4180 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h50m5.16376233s for next certificate rotation
Feb 20 11:48:02.135835 master-0 kubenswrapper[4180]: I0220 11:48:02.135686 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-s6zmp" event={"ID":"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd","Type":"ContainerStarted","Data":"17066e06e0b2e2b17534c5886b653c787b7eefd7c251f9787e27a3b174b19ab1"}
Feb 20 11:48:02.135835 master-0 kubenswrapper[4180]: I0220 11:48:02.135759 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-fv598" event={"ID":"312ca024-c8f0-4994-8f9a-b707607341fe","Type":"ContainerStarted","Data":"638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10"}
Feb 20 11:48:02.139186 master-0 kubenswrapper[4180]: I0220 11:48:02.139038 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 20 11:48:02.139816 master-0 kubenswrapper[4180]: I0220 11:48:02.139741 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"6a6cafc7c486ca7c318193e8cb75dc02c40abcaf8818e09b14c243a316830547"}
Feb 20 11:48:02.228116 master-0 kubenswrapper[4180]: I0220 11:48:02.227968 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:48:02.228581 master-0 kubenswrapper[4180]: E0220 11:48:02.228240 4180 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:02.228581 master-0 kubenswrapper[4180]: E0220 11:48:02.228363 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:48:04.228329527 +0000 UTC m=+45.403381387 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:04.244395 master-0 kubenswrapper[4180]: I0220 11:48:04.244331 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:48:04.245179 master-0 kubenswrapper[4180]: E0220 11:48:04.244586 4180 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:04.245355 master-0 kubenswrapper[4180]: E0220 11:48:04.245339 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:48:08.24531306 +0000 UTC m=+49.420364890 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:05.150017 master-0 kubenswrapper[4180]: I0220 11:48:05.149956 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-fv598" event={"ID":"312ca024-c8f0-4994-8f9a-b707607341fe","Type":"ContainerStarted","Data":"9e91bb7cb260950fd5e975354ec43adcbf694e33c154dd1b679deca6be0b9cfb"}
Feb 20 11:48:05.169280 master-0 kubenswrapper[4180]: I0220 11:48:05.169139 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7d7db75979-fv598" podStartSLOduration=7.56733241 podStartE2EDuration="11.169079492s" podCreationTimestamp="2026-02-20 11:47:54 +0000 UTC" firstStartedPulling="2026-02-20 11:48:00.854392247 +0000 UTC m=+42.029444107" lastFinishedPulling="2026-02-20 11:48:04.456139359 +0000 UTC m=+45.631191189" observedRunningTime="2026-02-20 11:48:05.168775124 +0000 UTC m=+46.343826954" watchObservedRunningTime="2026-02-20 11:48:05.169079492 +0000 UTC m=+46.344131352"
Feb 20 11:48:05.169511 master-0 kubenswrapper[4180]: I0220 11:48:05.169427 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=5.169416582 podStartE2EDuration="5.169416582s" podCreationTimestamp="2026-02-20 11:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:48:02.158215622 +0000 UTC m=+43.333267502" watchObservedRunningTime="2026-02-20 11:48:05.169416582 +0000 UTC m=+46.344468442"
Feb 20 11:48:07.166938 master-0 kubenswrapper[4180]: I0220 11:48:07.166395 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-8zqsd"]
Feb 20 11:48:07.166938 master-0 kubenswrapper[4180]: I0220 11:48:07.166739 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-8zqsd"
Feb 20 11:48:07.266965 master-0 kubenswrapper[4180]: I0220 11:48:07.266899 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8xf\" (UniqueName: \"kubernetes.io/projected/952aa6bb-4f60-4582-b978-52ebf9218755-kube-api-access-zm8xf\") pod \"mtu-prober-8zqsd\" (UID: \"952aa6bb-4f60-4582-b978-52ebf9218755\") " pod="openshift-network-operator/mtu-prober-8zqsd"
Feb 20 11:48:07.367326 master-0 kubenswrapper[4180]: I0220 11:48:07.367241 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8xf\" (UniqueName: \"kubernetes.io/projected/952aa6bb-4f60-4582-b978-52ebf9218755-kube-api-access-zm8xf\") pod \"mtu-prober-8zqsd\" (UID: \"952aa6bb-4f60-4582-b978-52ebf9218755\") " pod="openshift-network-operator/mtu-prober-8zqsd"
Feb 20 11:48:07.388146 master-0 kubenswrapper[4180]: I0220 11:48:07.388112 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8xf\" (UniqueName: \"kubernetes.io/projected/952aa6bb-4f60-4582-b978-52ebf9218755-kube-api-access-zm8xf\") pod \"mtu-prober-8zqsd\" (UID: \"952aa6bb-4f60-4582-b978-52ebf9218755\") " pod="openshift-network-operator/mtu-prober-8zqsd"
Feb 20 11:48:07.488915 master-0 kubenswrapper[4180]: I0220 11:48:07.488754 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-8zqsd"
Feb 20 11:48:07.542621 master-0 kubenswrapper[4180]: W0220 11:48:07.542580 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952aa6bb_4f60_4582_b978_52ebf9218755.slice/crio-aaff8df8130a8f21e8f2fac6966945ec9db98da32ff593d737d8c12e79e27bd7 WatchSource:0}: Error finding container aaff8df8130a8f21e8f2fac6966945ec9db98da32ff593d737d8c12e79e27bd7: Status 404 returned error can't find the container with id aaff8df8130a8f21e8f2fac6966945ec9db98da32ff593d737d8c12e79e27bd7
Feb 20 11:48:08.160555 master-0 kubenswrapper[4180]: I0220 11:48:08.160420 4180 generic.go:334] "Generic (PLEG): container finished" podID="952aa6bb-4f60-4582-b978-52ebf9218755" containerID="5ad7139b014a017e9214a9b49d5763ba0bf59d3613eecad560b203e714e96877" exitCode=0
Feb 20 11:48:08.160884 master-0 kubenswrapper[4180]: I0220 11:48:08.160568 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-8zqsd" event={"ID":"952aa6bb-4f60-4582-b978-52ebf9218755","Type":"ContainerDied","Data":"5ad7139b014a017e9214a9b49d5763ba0bf59d3613eecad560b203e714e96877"}
Feb 20 11:48:08.160884 master-0 kubenswrapper[4180]: I0220 11:48:08.160639 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-8zqsd" event={"ID":"952aa6bb-4f60-4582-b978-52ebf9218755","Type":"ContainerStarted","Data":"aaff8df8130a8f21e8f2fac6966945ec9db98da32ff593d737d8c12e79e27bd7"}
Feb 20 11:48:08.163491 master-0 kubenswrapper[4180]: I0220 11:48:08.163435 4180 generic.go:334] "Generic (PLEG): container finished" podID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerID="136d6f3a9793756201eb14c53a4ba43141e49068fbce78152349e9d918491065" exitCode=0
Feb 20 11:48:08.163491 master-0 kubenswrapper[4180]: I0220 11:48:08.163484 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-s6zmp" event={"ID":"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd","Type":"ContainerDied","Data":"136d6f3a9793756201eb14c53a4ba43141e49068fbce78152349e9d918491065"}
Feb 20 11:48:08.273455 master-0 kubenswrapper[4180]: I0220 11:48:08.273396 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:48:08.274513 master-0 kubenswrapper[4180]: E0220 11:48:08.273599 4180 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:08.274513 master-0 kubenswrapper[4180]: E0220 11:48:08.273670 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:48:16.27364702 +0000 UTC m=+57.448698880 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:09.184934 master-0 kubenswrapper[4180]: I0220 11:48:09.184881 4180 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-8zqsd"
Feb 20 11:48:09.189498 master-0 kubenswrapper[4180]: I0220 11:48:09.189449 4180 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s6zmp"
Feb 20 11:48:09.282410 master-0 kubenswrapper[4180]: I0220 11:48:09.282265 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-sno-bootstrap-files\") pod \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") "
Feb 20 11:48:09.282410 master-0 kubenswrapper[4180]: I0220 11:48:09.282339 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-ca-bundle\") pod \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") "
Feb 20 11:48:09.282410 master-0 kubenswrapper[4180]: I0220 11:48:09.282381 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjphz\" (UniqueName: \"kubernetes.io/projected/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-kube-api-access-kjphz\") pod \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") "
Feb 20 11:48:09.282410 master-0 kubenswrapper[4180]: I0220 11:48:09.282393 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" (UID: "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:48:09.282410 master-0 kubenswrapper[4180]: I0220 11:48:09.282419 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-resolv-conf\") pod \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") "
Feb 20 11:48:09.284154 master-0 kubenswrapper[4180]: I0220 11:48:09.282449 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-var-run-resolv-conf\") pod \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\" (UID: \"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd\") "
Feb 20 11:48:09.284154 master-0 kubenswrapper[4180]: I0220 11:48:09.282485 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm8xf\" (UniqueName: \"kubernetes.io/projected/952aa6bb-4f60-4582-b978-52ebf9218755-kube-api-access-zm8xf\") pod \"952aa6bb-4f60-4582-b978-52ebf9218755\" (UID: \"952aa6bb-4f60-4582-b978-52ebf9218755\") "
Feb 20 11:48:09.284154 master-0 kubenswrapper[4180]: I0220 11:48:09.282476 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" (UID: "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:48:09.284154 master-0 kubenswrapper[4180]: I0220 11:48:09.282586 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" (UID: "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:48:09.284154 master-0 kubenswrapper[4180]: I0220 11:48:09.282601 4180 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Feb 20 11:48:09.284154 master-0 kubenswrapper[4180]: I0220 11:48:09.282663 4180 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 11:48:09.284154 master-0 kubenswrapper[4180]: I0220 11:48:09.282648 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" (UID: "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:48:09.287680 master-0 kubenswrapper[4180]: I0220 11:48:09.287602 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952aa6bb-4f60-4582-b978-52ebf9218755-kube-api-access-zm8xf" (OuterVolumeSpecName: "kube-api-access-zm8xf") pod "952aa6bb-4f60-4582-b978-52ebf9218755" (UID: "952aa6bb-4f60-4582-b978-52ebf9218755"). InnerVolumeSpecName "kube-api-access-zm8xf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:48:09.287812 master-0 kubenswrapper[4180]: I0220 11:48:09.287784 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-kube-api-access-kjphz" (OuterVolumeSpecName: "kube-api-access-kjphz") pod "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" (UID: "ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd"). InnerVolumeSpecName "kube-api-access-kjphz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:48:09.383899 master-0 kubenswrapper[4180]: I0220 11:48:09.383791 4180 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjphz\" (UniqueName: \"kubernetes.io/projected/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-kube-api-access-kjphz\") on node \"master-0\" DevicePath \"\""
Feb 20 11:48:09.383899 master-0 kubenswrapper[4180]: I0220 11:48:09.383851 4180 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Feb 20 11:48:09.383899 master-0 kubenswrapper[4180]: I0220 11:48:09.383893 4180 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Feb 20 11:48:09.384258 master-0 kubenswrapper[4180]: I0220 11:48:09.383922 4180 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm8xf\" (UniqueName: \"kubernetes.io/projected/952aa6bb-4f60-4582-b978-52ebf9218755-kube-api-access-zm8xf\") on node \"master-0\" DevicePath \"\""
Feb 20 11:48:10.170083 master-0 kubenswrapper[4180]: I0220 11:48:10.169847 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-8zqsd" event={"ID":"952aa6bb-4f60-4582-b978-52ebf9218755","Type":"ContainerDied","Data":"aaff8df8130a8f21e8f2fac6966945ec9db98da32ff593d737d8c12e79e27bd7"}
Feb 20 11:48:10.170083 master-0 kubenswrapper[4180]: I0220 11:48:10.169911 4180 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaff8df8130a8f21e8f2fac6966945ec9db98da32ff593d737d8c12e79e27bd7"
Feb 20 11:48:10.170083 master-0 kubenswrapper[4180]: I0220 11:48:10.169987 4180 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-8zqsd"
Feb 20 11:48:10.172429 master-0 kubenswrapper[4180]: I0220 11:48:10.172358 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-s6zmp" event={"ID":"ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd","Type":"ContainerDied","Data":"17066e06e0b2e2b17534c5886b653c787b7eefd7c251f9787e27a3b174b19ab1"}
Feb 20 11:48:10.172429 master-0 kubenswrapper[4180]: I0220 11:48:10.172422 4180 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s6zmp"
Feb 20 11:48:10.172641 master-0 kubenswrapper[4180]: I0220 11:48:10.172430 4180 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17066e06e0b2e2b17534c5886b653c787b7eefd7c251f9787e27a3b174b19ab1"
Feb 20 11:48:12.181472 master-0 kubenswrapper[4180]: I0220 11:48:12.181353 4180 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-8zqsd"]
Feb 20 11:48:12.184785 master-0 kubenswrapper[4180]: I0220 11:48:12.184703 4180 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-8zqsd"]
Feb 20 11:48:13.670066 master-0 kubenswrapper[4180]: I0220 11:48:13.669942 4180 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="952aa6bb-4f60-4582-b978-52ebf9218755" path="/var/lib/kubelet/pods/952aa6bb-4f60-4582-b978-52ebf9218755/volumes"
Feb 20 11:48:16.333751 master-0 kubenswrapper[4180]: I0220 11:48:16.333687 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:48:16.334795 master-0 kubenswrapper[4180]: E0220 11:48:16.333875 4180 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:16.334937 master-0 kubenswrapper[4180]: E0220 11:48:16.334873 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:48:32.334835098 +0000 UTC m=+73.509886948 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found
Feb 20 11:48:17.066448 master-0 kubenswrapper[4180]: I0220 11:48:17.066373 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9qpc7"]
Feb 20 11:48:17.066713 master-0 kubenswrapper[4180]: E0220 11:48:17.066494 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952aa6bb-4f60-4582-b978-52ebf9218755" containerName="prober"
Feb 20 11:48:17.066713 master-0 kubenswrapper[4180]: I0220 11:48:17.066513 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="952aa6bb-4f60-4582-b978-52ebf9218755" containerName="prober"
Feb 20 11:48:17.066713 master-0 kubenswrapper[4180]: E0220 11:48:17.066563 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerName="assisted-installer-controller"
Feb 20 11:48:17.066713 master-0 kubenswrapper[4180]: I0220 11:48:17.066581 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerName="assisted-installer-controller"
Feb 20 11:48:17.066713 master-0 kubenswrapper[4180]: I0220 11:48:17.066649 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerName="assisted-installer-controller"
Feb 20 11:48:17.066713 master-0 kubenswrapper[4180]: I0220 11:48:17.066665 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="952aa6bb-4f60-4582-b978-52ebf9218755" containerName="prober"
Feb 20 11:48:17.067043 master-0 kubenswrapper[4180]: I0220 11:48:17.066924 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.069394 master-0 kubenswrapper[4180]: I0220 11:48:17.069342 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 20 11:48:17.069605 master-0 kubenswrapper[4180]: I0220 11:48:17.069441 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 20 11:48:17.069775 master-0 kubenswrapper[4180]: I0220 11:48:17.069740 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 20 11:48:17.070002 master-0 kubenswrapper[4180]: I0220 11:48:17.069961 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 20 11:48:17.140491 master-0 kubenswrapper[4180]: I0220 11:48:17.140411 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-hostroot\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.140491 master-0 kubenswrapper[4180]: I0220 11:48:17.140493 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-netns\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.140910 master-0 kubenswrapper[4180]: I0220 11:48:17.140549 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-k8s-cni-cncf-io\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.140910 master-0 kubenswrapper[4180]: I0220 11:48:17.140689 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.140910 master-0 kubenswrapper[4180]: I0220 11:48:17.140793 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-os-release\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.140910 master-0 kubenswrapper[4180]: I0220 11:48:17.140829 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-socket-dir-parent\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.140910 master-0 kubenswrapper[4180]: I0220 11:48:17.140876 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-daemon-config\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.140910 master-0 kubenswrapper[4180]: I0220 11:48:17.140906 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-etc-kubernetes\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.141293 master-0 kubenswrapper[4180]: I0220 11:48:17.140972 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-bin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.141293 master-0 kubenswrapper[4180]: I0220 11:48:17.141006 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-kubelet\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.141293 master-0 kubenswrapper[4180]: I0220 11:48:17.141035 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrwcs\" (UniqueName: \"kubernetes.io/projected/533fe3c7-504f-40aa-aab0-8d66ef27920f-kube-api-access-jrwcs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.141293 master-0 kubenswrapper[4180]: I0220 11:48:17.141125 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-conf-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.141293 master-0 kubenswrapper[4180]: I0220 11:48:17.141184 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-multus-certs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.141293 master-0 kubenswrapper[4180]: I0220 11:48:17.141215 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-cni-binary-copy\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.141293 master-0 kubenswrapper[4180]: I0220 11:48:17.141248 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-system-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.141293 master-0 kubenswrapper[4180]: I0220 11:48:17.141278 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-cnibin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.141760 master-0 kubenswrapper[4180]: I0220 11:48:17.141305 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-multus\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.241794 master-0 kubenswrapper[4180]: I0220 11:48:17.241681 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-bin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.241794 master-0 kubenswrapper[4180]: I0220 11:48:17.241739 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-kubelet\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242173 master-0 kubenswrapper[4180]: I0220 11:48:17.241819 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-bin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242173 master-0 kubenswrapper[4180]: I0220 11:48:17.241938 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-kubelet\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242173 master-0 kubenswrapper[4180]: I0220 11:48:17.242029 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwcs\" (UniqueName: \"kubernetes.io/projected/533fe3c7-504f-40aa-aab0-8d66ef27920f-kube-api-access-jrwcs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242173 master-0 kubenswrapper[4180]: I0220 11:48:17.242122 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-conf-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242175 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-multus-certs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242223 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-cni-binary-copy\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242273 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-system-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242319 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-cnibin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242371 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-multus\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242421 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-hostroot\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242507 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-netns\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242605 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-k8s-cni-cncf-io\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242617 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-conf-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242650 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:48:17.242731 master-0 kubenswrapper[4180]: I0220 11:48:17.242714 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName:
\"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-os-release\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.242762 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.242783 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-socket-dir-parent\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.242851 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-daemon-config\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.242886 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-etc-kubernetes\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.242980 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-system-cni-dir\") pod \"multus-9qpc7\" (UID: 
\"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.242993 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-etc-kubernetes\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.243102 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-multus-certs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.243156 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-os-release\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.243204 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-socket-dir-parent\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.243247 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-hostroot\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 
11:48:17.243343 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-netns\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.243446 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-cnibin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.243589 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-k8s-cni-cncf-io\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.243591 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-cni-binary-copy\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.243633 master-0 kubenswrapper[4180]: I0220 11:48:17.243632 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-multus\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.244855 master-0 kubenswrapper[4180]: I0220 11:48:17.244156 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-daemon-config\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.267216 master-0 kubenswrapper[4180]: I0220 11:48:17.267156 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-f2l64"] Feb 20 11:48:17.267629 master-0 kubenswrapper[4180]: I0220 11:48:17.267598 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.270518 master-0 kubenswrapper[4180]: I0220 11:48:17.270479 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 20 11:48:17.270808 master-0 kubenswrapper[4180]: I0220 11:48:17.270768 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 20 11:48:17.276202 master-0 kubenswrapper[4180]: I0220 11:48:17.276151 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwcs\" (UniqueName: \"kubernetes.io/projected/533fe3c7-504f-40aa-aab0-8d66ef27920f-kube-api-access-jrwcs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.344629 master-0 kubenswrapper[4180]: I0220 11:48:17.344407 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.344629 master-0 kubenswrapper[4180]: I0220 11:48:17.344551 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5pw4\" 
(UniqueName: \"kubernetes.io/projected/07281644-2789-424f-8429-aa4448dda01e-kube-api-access-l5pw4\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.345407 master-0 kubenswrapper[4180]: I0220 11:48:17.344685 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-cnibin\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.345407 master-0 kubenswrapper[4180]: I0220 11:48:17.344740 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.345407 master-0 kubenswrapper[4180]: I0220 11:48:17.344847 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.345407 master-0 kubenswrapper[4180]: I0220 11:48:17.344888 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-system-cni-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " 
pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.345407 master-0 kubenswrapper[4180]: I0220 11:48:17.344916 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-os-release\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.345407 master-0 kubenswrapper[4180]: I0220 11:48:17.345002 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.386415 master-0 kubenswrapper[4180]: I0220 11:48:17.385927 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9qpc7" Feb 20 11:48:17.399975 master-0 kubenswrapper[4180]: W0220 11:48:17.399901 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod533fe3c7_504f_40aa_aab0_8d66ef27920f.slice/crio-daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e WatchSource:0}: Error finding container daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e: Status 404 returned error can't find the container with id daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e Feb 20 11:48:17.445963 master-0 kubenswrapper[4180]: I0220 11:48:17.445840 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-cnibin\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.445963 master-0 kubenswrapper[4180]: I0220 11:48:17.445940 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446271 master-0 kubenswrapper[4180]: I0220 11:48:17.446089 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-cnibin\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446271 master-0 kubenswrapper[4180]: I0220 11:48:17.446151 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446271 master-0 kubenswrapper[4180]: I0220 11:48:17.446182 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-system-cni-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446271 master-0 kubenswrapper[4180]: I0220 11:48:17.446204 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-os-release\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446271 master-0 kubenswrapper[4180]: I0220 11:48:17.446225 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446271 master-0 kubenswrapper[4180]: I0220 11:48:17.446223 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446271 master-0 
kubenswrapper[4180]: I0220 11:48:17.446248 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446861 master-0 kubenswrapper[4180]: I0220 11:48:17.446346 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pw4\" (UniqueName: \"kubernetes.io/projected/07281644-2789-424f-8429-aa4448dda01e-kube-api-access-l5pw4\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446861 master-0 kubenswrapper[4180]: I0220 11:48:17.446371 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-os-release\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.446861 master-0 kubenswrapper[4180]: I0220 11:48:17.446433 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-system-cni-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.447807 master-0 kubenswrapper[4180]: I0220 11:48:17.447738 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f2l64\" (UID: 
\"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.448198 master-0 kubenswrapper[4180]: I0220 11:48:17.448144 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.448722 master-0 kubenswrapper[4180]: I0220 11:48:17.448655 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.472426 master-0 kubenswrapper[4180]: I0220 11:48:17.472345 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pw4\" (UniqueName: \"kubernetes.io/projected/07281644-2789-424f-8429-aa4448dda01e-kube-api-access-l5pw4\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.595785 master-0 kubenswrapper[4180]: I0220 11:48:17.595650 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:48:17.613750 master-0 kubenswrapper[4180]: W0220 11:48:17.613641 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07281644_2789_424f_8429_aa4448dda01e.slice/crio-b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63 WatchSource:0}: Error finding container b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63: Status 404 returned error can't find the container with id b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63 Feb 20 11:48:18.046031 master-0 kubenswrapper[4180]: I0220 11:48:18.045971 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-29622"] Feb 20 11:48:18.046513 master-0 kubenswrapper[4180]: I0220 11:48:18.046464 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:18.046664 master-0 kubenswrapper[4180]: E0220 11:48:18.046609 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828" Feb 20 11:48:18.152275 master-0 kubenswrapper[4180]: I0220 11:48:18.152226 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79j9f\" (UniqueName: \"kubernetes.io/projected/1709ef31-9ddd-42bf-9a95-4be4502a0828-kube-api-access-79j9f\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:18.152275 master-0 kubenswrapper[4180]: I0220 11:48:18.152273 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:18.193456 master-0 kubenswrapper[4180]: I0220 11:48:18.193410 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2l64" event={"ID":"07281644-2789-424f-8429-aa4448dda01e","Type":"ContainerStarted","Data":"b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63"} Feb 20 11:48:18.194628 master-0 kubenswrapper[4180]: I0220 11:48:18.194595 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9qpc7" event={"ID":"533fe3c7-504f-40aa-aab0-8d66ef27920f","Type":"ContainerStarted","Data":"daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e"} Feb 20 11:48:18.253430 master-0 kubenswrapper[4180]: I0220 11:48:18.253380 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79j9f\" (UniqueName: \"kubernetes.io/projected/1709ef31-9ddd-42bf-9a95-4be4502a0828-kube-api-access-79j9f\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " 
pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:18.253430 master-0 kubenswrapper[4180]: I0220 11:48:18.253429 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:18.253701 master-0 kubenswrapper[4180]: E0220 11:48:18.253577 4180 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 11:48:18.253701 master-0 kubenswrapper[4180]: E0220 11:48:18.253641 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:18.753625019 +0000 UTC m=+59.928676839 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 11:48:18.274212 master-0 kubenswrapper[4180]: I0220 11:48:18.274172 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79j9f\" (UniqueName: \"kubernetes.io/projected/1709ef31-9ddd-42bf-9a95-4be4502a0828-kube-api-access-79j9f\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:18.757005 master-0 kubenswrapper[4180]: I0220 11:48:18.756946 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:18.757474 master-0 kubenswrapper[4180]: E0220 11:48:18.757195 4180 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 11:48:18.757474 master-0 kubenswrapper[4180]: E0220 11:48:18.757309 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:19.757278246 +0000 UTC m=+60.932330106 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 11:48:19.671903 master-0 kubenswrapper[4180]: I0220 11:48:19.671840 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:19.672474 master-0 kubenswrapper[4180]: E0220 11:48:19.672382 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:19.763132 master-0 kubenswrapper[4180]: I0220 11:48:19.763080 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:19.764178 master-0 kubenswrapper[4180]: E0220 11:48:19.763234 4180 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 11:48:19.764178 master-0 kubenswrapper[4180]: E0220 11:48:19.763294 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:21.763274603 +0000 UTC m=+62.938326433 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 11:48:21.204834 master-0 kubenswrapper[4180]: I0220 11:48:21.204749 4180 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="fd6a9476a5e46b15b6371b4f9b6a262cda38dc0b2ce85f673487d39ba4902d2c" exitCode=0
Feb 20 11:48:21.205707 master-0 kubenswrapper[4180]: I0220 11:48:21.204808 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2l64" event={"ID":"07281644-2789-424f-8429-aa4448dda01e","Type":"ContainerDied","Data":"fd6a9476a5e46b15b6371b4f9b6a262cda38dc0b2ce85f673487d39ba4902d2c"}
Feb 20 11:48:21.665561 master-0 kubenswrapper[4180]: I0220 11:48:21.664686 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:21.665561 master-0 kubenswrapper[4180]: E0220 11:48:21.664916 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:21.779202 master-0 kubenswrapper[4180]: I0220 11:48:21.779147 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:21.779448 master-0 kubenswrapper[4180]: E0220 11:48:21.779313 4180 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 11:48:21.779448 master-0 kubenswrapper[4180]: E0220 11:48:21.779388 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:25.779366211 +0000 UTC m=+66.954418041 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 11:48:23.663862 master-0 kubenswrapper[4180]: I0220 11:48:23.663792 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:23.664372 master-0 kubenswrapper[4180]: E0220 11:48:23.663984 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:25.664508 master-0 kubenswrapper[4180]: I0220 11:48:25.664428 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:25.665651 master-0 kubenswrapper[4180]: E0220 11:48:25.664695 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:25.813787 master-0 kubenswrapper[4180]: I0220 11:48:25.813709 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:25.813998 master-0 kubenswrapper[4180]: E0220 11:48:25.813845 4180 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 11:48:25.813998 master-0 kubenswrapper[4180]: E0220 11:48:25.813910 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:33.813894344 +0000 UTC m=+74.988946164 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 11:48:27.664624 master-0 kubenswrapper[4180]: I0220 11:48:27.664557 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:27.665993 master-0 kubenswrapper[4180]: E0220 11:48:27.664693 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:29.464221 master-0 kubenswrapper[4180]: I0220 11:48:29.464122 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"]
Feb 20 11:48:29.465266 master-0 kubenswrapper[4180]: I0220 11:48:29.465223 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.469407 master-0 kubenswrapper[4180]: I0220 11:48:29.469081 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 20 11:48:29.470916 master-0 kubenswrapper[4180]: I0220 11:48:29.469817 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 20 11:48:29.470916 master-0 kubenswrapper[4180]: I0220 11:48:29.469837 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 20 11:48:29.470916 master-0 kubenswrapper[4180]: I0220 11:48:29.469903 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 20 11:48:29.470916 master-0 kubenswrapper[4180]: I0220 11:48:29.470001 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 20 11:48:29.538633 master-0 kubenswrapper[4180]: I0220 11:48:29.538522 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ts6s\" (UniqueName: \"kubernetes.io/projected/31969539-bfd1-466f-8697-f13cbbd957df-kube-api-access-7ts6s\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.538633 master-0 kubenswrapper[4180]: I0220 11:48:29.538628 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.538902 master-0 kubenswrapper[4180]: I0220 11:48:29.538676 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31969539-bfd1-466f-8697-f13cbbd957df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.538902 master-0 kubenswrapper[4180]: I0220 11:48:29.538751 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.640643 master-0 kubenswrapper[4180]: I0220 11:48:29.639586 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ts6s\" (UniqueName: \"kubernetes.io/projected/31969539-bfd1-466f-8697-f13cbbd957df-kube-api-access-7ts6s\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.640643 master-0 kubenswrapper[4180]: I0220 11:48:29.639847 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.640643 master-0 kubenswrapper[4180]: I0220 11:48:29.639954 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31969539-bfd1-466f-8697-f13cbbd957df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.640643 master-0 kubenswrapper[4180]: I0220 11:48:29.640315 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.640919 master-0 kubenswrapper[4180]: I0220 11:48:29.640674 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.641426 master-0 kubenswrapper[4180]: I0220 11:48:29.641384 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.647630 master-0 kubenswrapper[4180]: I0220 11:48:29.647291 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31969539-bfd1-466f-8697-f13cbbd957df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.664113 master-0 kubenswrapper[4180]: I0220 11:48:29.664002 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:29.665290 master-0 kubenswrapper[4180]: E0220 11:48:29.665246 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:29.667230 master-0 kubenswrapper[4180]: I0220 11:48:29.667194 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ts6s\" (UniqueName: \"kubernetes.io/projected/31969539-bfd1-466f-8697-f13cbbd957df-kube-api-access-7ts6s\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.671943 master-0 kubenswrapper[4180]: I0220 11:48:29.671900 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bj4fp"]
Feb 20 11:48:29.673019 master-0 kubenswrapper[4180]: I0220 11:48:29.672983 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.674727 master-0 kubenswrapper[4180]: I0220 11:48:29.674648 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 20 11:48:29.674975 master-0 kubenswrapper[4180]: I0220 11:48:29.674931 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 20 11:48:29.741285 master-0 kubenswrapper[4180]: I0220 11:48:29.741183 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-netns\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741285 master-0 kubenswrapper[4180]: I0220 11:48:29.741244 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-env-overrides\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741453 master-0 kubenswrapper[4180]: I0220 11:48:29.741295 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e979c659-8581-466f-8528-01b6b4f51499-ovn-node-metrics-cert\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741453 master-0 kubenswrapper[4180]: I0220 11:48:29.741323 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-var-lib-openvswitch\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741453 master-0 kubenswrapper[4180]: I0220 11:48:29.741352 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-ovn\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741453 master-0 kubenswrapper[4180]: I0220 11:48:29.741373 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-ovn-kubernetes\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741453 master-0 kubenswrapper[4180]: I0220 11:48:29.741394 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-systemd-units\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741453 master-0 kubenswrapper[4180]: I0220 11:48:29.741415 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741453 master-0 kubenswrapper[4180]: I0220 11:48:29.741438 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-config\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741654 master-0 kubenswrapper[4180]: I0220 11:48:29.741519 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbb4\" (UniqueName: \"kubernetes.io/projected/e979c659-8581-466f-8528-01b6b4f51499-kube-api-access-jbbb4\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741654 master-0 kubenswrapper[4180]: I0220 11:48:29.741619 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-slash\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741712 master-0 kubenswrapper[4180]: I0220 11:48:29.741650 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-bin\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741712 master-0 kubenswrapper[4180]: I0220 11:48:29.741689 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-kubelet\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741770 master-0 kubenswrapper[4180]: I0220 11:48:29.741730 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-script-lib\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741770 master-0 kubenswrapper[4180]: I0220 11:48:29.741754 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-etc-openvswitch\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741821 master-0 kubenswrapper[4180]: I0220 11:48:29.741785 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-openvswitch\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741821 master-0 kubenswrapper[4180]: I0220 11:48:29.741802 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-systemd\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741821 master-0 kubenswrapper[4180]: I0220 11:48:29.741818 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-netd\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741907 master-0 kubenswrapper[4180]: I0220 11:48:29.741843 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-node-log\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.741907 master-0 kubenswrapper[4180]: I0220 11:48:29.741876 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-log-socket\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.780807 master-0 kubenswrapper[4180]: I0220 11:48:29.780753 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:48:29.842426 master-0 kubenswrapper[4180]: I0220 11:48:29.842378 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-ovn\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842610 master-0 kubenswrapper[4180]: I0220 11:48:29.842521 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-ovn\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842610 master-0 kubenswrapper[4180]: I0220 11:48:29.842582 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-ovn-kubernetes\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842701 master-0 kubenswrapper[4180]: I0220 11:48:29.842630 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-systemd-units\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842701 master-0 kubenswrapper[4180]: I0220 11:48:29.842640 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-ovn-kubernetes\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842701 master-0 kubenswrapper[4180]: I0220 11:48:29.842665 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-var-lib-openvswitch\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842701 master-0 kubenswrapper[4180]: I0220 11:48:29.842692 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-systemd-units\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842818 master-0 kubenswrapper[4180]: I0220 11:48:29.842702 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842818 master-0 kubenswrapper[4180]: I0220 11:48:29.842774 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842818 master-0 kubenswrapper[4180]: I0220 11:48:29.842788 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-var-lib-openvswitch\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842901 master-0 kubenswrapper[4180]: I0220 11:48:29.842837 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-config\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842901 master-0 kubenswrapper[4180]: I0220 11:48:29.842857 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbb4\" (UniqueName: \"kubernetes.io/projected/e979c659-8581-466f-8528-01b6b4f51499-kube-api-access-jbbb4\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842901 master-0 kubenswrapper[4180]: I0220 11:48:29.842880 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-bin\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842983 master-0 kubenswrapper[4180]: I0220 11:48:29.842902 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-slash\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842983 master-0 kubenswrapper[4180]: I0220 11:48:29.842942 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-slash\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.842983 master-0 kubenswrapper[4180]: I0220 11:48:29.842968 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-bin\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843111 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-kubelet\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843142 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-script-lib\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843161 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-kubelet\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843168 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-etc-openvswitch\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843213 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-openvswitch\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843241 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-netd\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843281 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-systemd\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843310 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-node-log\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843337 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-log-socket\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843365 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-netns\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843392 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-env-overrides\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.843692 master-0 kubenswrapper[4180]: I0220 11:48:29.843423 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e979c659-8581-466f-8528-01b6b4f51499-ovn-node-metrics-cert\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.843862 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-script-lib\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.843900 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-systemd\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.843923 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-etc-openvswitch\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.843943 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-openvswitch\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.843963 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-netd\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.843988 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-netns\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.844050 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-log-socket\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.844170 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-node-log\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.844341 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-env-overrides\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.844942 master-0 kubenswrapper[4180]: I0220 11:48:29.844890 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-config\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.847844 master-0 kubenswrapper[4180]: I0220 11:48:29.847809 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e979c659-8581-466f-8528-01b6b4f51499-ovn-node-metrics-cert\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:29.872463 master-0 kubenswrapper[4180]: I0220 11:48:29.872405 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbb4\" (UniqueName:
\"kubernetes.io/projected/e979c659-8581-466f-8528-01b6b4f51499-kube-api-access-jbbb4\") pod \"ovnkube-node-bj4fp\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" Feb 20 11:48:29.985913 master-0 kubenswrapper[4180]: I0220 11:48:29.985851 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" Feb 20 11:48:31.233624 master-0 kubenswrapper[4180]: I0220 11:48:31.232930 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9qpc7" event={"ID":"533fe3c7-504f-40aa-aab0-8d66ef27920f","Type":"ContainerStarted","Data":"64459734fca304401891759da747fee38281849cb2a9e70927ee5e1f8ddede55"} Feb 20 11:48:31.234554 master-0 kubenswrapper[4180]: I0220 11:48:31.234502 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" event={"ID":"31969539-bfd1-466f-8697-f13cbbd957df","Type":"ContainerStarted","Data":"ad658dcd2fa690480858473bc5f21788ccae22e5c8f1dcb56da1cca529a4b70f"} Feb 20 11:48:31.234554 master-0 kubenswrapper[4180]: I0220 11:48:31.234539 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" event={"ID":"31969539-bfd1-466f-8697-f13cbbd957df","Type":"ContainerStarted","Data":"ad27979ee67ec73db6166a66f6c8de5d02655b589472440fd2f397e6aebb3ab2"} Feb 20 11:48:31.245432 master-0 kubenswrapper[4180]: I0220 11:48:31.245332 4180 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="59c4640ef16d19d630f393a377f5a55900e0d594bb8e948836367e29624486c7" exitCode=0 Feb 20 11:48:31.245432 master-0 kubenswrapper[4180]: I0220 11:48:31.245403 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2l64" 
event={"ID":"07281644-2789-424f-8429-aa4448dda01e","Type":"ContainerDied","Data":"59c4640ef16d19d630f393a377f5a55900e0d594bb8e948836367e29624486c7"} Feb 20 11:48:31.246593 master-0 kubenswrapper[4180]: I0220 11:48:31.246444 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerStarted","Data":"74210ef7f7477a5cc9d9d264c0abf5069d4cce4d0d3176995bf660061a1084b1"} Feb 20 11:48:31.268392 master-0 kubenswrapper[4180]: I0220 11:48:31.267786 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9qpc7" podStartSLOduration=1.202023134 podStartE2EDuration="14.267763084s" podCreationTimestamp="2026-02-20 11:48:17 +0000 UTC" firstStartedPulling="2026-02-20 11:48:17.401390422 +0000 UTC m=+58.576442232" lastFinishedPulling="2026-02-20 11:48:30.467130322 +0000 UTC m=+71.642182182" observedRunningTime="2026-02-20 11:48:31.251355901 +0000 UTC m=+72.426407721" watchObservedRunningTime="2026-02-20 11:48:31.267763084 +0000 UTC m=+72.442814904" Feb 20 11:48:31.665667 master-0 kubenswrapper[4180]: I0220 11:48:31.665590 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:31.665918 master-0 kubenswrapper[4180]: E0220 11:48:31.665867 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828" Feb 20 11:48:32.372567 master-0 kubenswrapper[4180]: I0220 11:48:32.372434 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:48:32.373445 master-0 kubenswrapper[4180]: E0220 11:48:32.372673 4180 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 11:48:32.373445 master-0 kubenswrapper[4180]: E0220 11:48:32.372762 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:04.372736253 +0000 UTC m=+105.547788273 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found Feb 20 11:48:32.642653 master-0 kubenswrapper[4180]: I0220 11:48:32.642497 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-h5w2t"] Feb 20 11:48:32.642867 master-0 kubenswrapper[4180]: I0220 11:48:32.642839 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:48:32.642935 master-0 kubenswrapper[4180]: E0220 11:48:32.642901 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7" Feb 20 11:48:32.777342 master-0 kubenswrapper[4180]: I0220 11:48:32.777198 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:48:32.878410 master-0 kubenswrapper[4180]: I0220 11:48:32.878324 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:48:32.893596 master-0 kubenswrapper[4180]: E0220 11:48:32.893465 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 11:48:32.893596 master-0 kubenswrapper[4180]: E0220 11:48:32.893510 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 11:48:32.893596 master-0 
kubenswrapper[4180]: E0220 11:48:32.893547 4180 projected.go:194] Error preparing data for projected volume kube-api-access-4zmwm for pod openshift-network-diagnostics/network-check-target-h5w2t: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 11:48:32.893881 master-0 kubenswrapper[4180]: E0220 11:48:32.893631 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm podName:39ccf158-b40f-4dba-90e2-27b1409487b7 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:33.393607549 +0000 UTC m=+74.568659379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4zmwm" (UniqueName: "kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm") pod "network-check-target-h5w2t" (UID: "39ccf158-b40f-4dba-90e2-27b1409487b7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 11:48:33.482895 master-0 kubenswrapper[4180]: I0220 11:48:33.482828 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:48:33.484339 master-0 kubenswrapper[4180]: E0220 11:48:33.483013 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 11:48:33.484339 master-0 kubenswrapper[4180]: E0220 11:48:33.483035 4180 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 11:48:33.484339 master-0 kubenswrapper[4180]: E0220 11:48:33.483048 4180 projected.go:194] Error preparing data for projected volume kube-api-access-4zmwm for pod openshift-network-diagnostics/network-check-target-h5w2t: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 11:48:33.484339 master-0 kubenswrapper[4180]: E0220 11:48:33.483109 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm podName:39ccf158-b40f-4dba-90e2-27b1409487b7 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:34.483091722 +0000 UTC m=+75.658143552 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4zmwm" (UniqueName: "kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm") pod "network-check-target-h5w2t" (UID: "39ccf158-b40f-4dba-90e2-27b1409487b7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 11:48:33.664171 master-0 kubenswrapper[4180]: I0220 11:48:33.664056 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:33.664454 master-0 kubenswrapper[4180]: E0220 11:48:33.664274 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828" Feb 20 11:48:33.885918 master-0 kubenswrapper[4180]: I0220 11:48:33.885831 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:33.886131 master-0 kubenswrapper[4180]: E0220 11:48:33.886030 4180 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 11:48:33.886131 master-0 kubenswrapper[4180]: E0220 11:48:33.886110 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:49.886090251 +0000 UTC m=+91.061142071 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 11:48:34.258773 master-0 kubenswrapper[4180]: I0220 11:48:34.258685 4180 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="f1fbf807f82eab937178a587053f37db417fee5bbaad310485c0ef4a2b0f6684" exitCode=0 Feb 20 11:48:34.258773 master-0 kubenswrapper[4180]: I0220 11:48:34.258771 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2l64" event={"ID":"07281644-2789-424f-8429-aa4448dda01e","Type":"ContainerDied","Data":"f1fbf807f82eab937178a587053f37db417fee5bbaad310485c0ef4a2b0f6684"} Feb 20 11:48:34.491663 master-0 kubenswrapper[4180]: I0220 11:48:34.491627 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:48:34.492105 master-0 kubenswrapper[4180]: E0220 11:48:34.491778 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 11:48:34.492105 master-0 kubenswrapper[4180]: E0220 11:48:34.491795 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 11:48:34.492105 master-0 kubenswrapper[4180]: E0220 11:48:34.491807 4180 projected.go:194] Error preparing data for projected volume kube-api-access-4zmwm for 
pod openshift-network-diagnostics/network-check-target-h5w2t: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 11:48:34.492105 master-0 kubenswrapper[4180]: E0220 11:48:34.491851 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm podName:39ccf158-b40f-4dba-90e2-27b1409487b7 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:36.491838123 +0000 UTC m=+77.666889933 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4zmwm" (UniqueName: "kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm") pod "network-check-target-h5w2t" (UID: "39ccf158-b40f-4dba-90e2-27b1409487b7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 11:48:34.664004 master-0 kubenswrapper[4180]: I0220 11:48:34.663867 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:48:34.664246 master-0 kubenswrapper[4180]: E0220 11:48:34.664059 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7" Feb 20 11:48:35.666956 master-0 kubenswrapper[4180]: I0220 11:48:35.666890 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:48:35.667460 master-0 kubenswrapper[4180]: E0220 11:48:35.667037 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828" Feb 20 11:48:36.146301 master-0 kubenswrapper[4180]: I0220 11:48:36.146095 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-psm4s"] Feb 20 11:48:36.146633 master-0 kubenswrapper[4180]: I0220 11:48:36.146603 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.148990 master-0 kubenswrapper[4180]: I0220 11:48:36.148938 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 20 11:48:36.149191 master-0 kubenswrapper[4180]: I0220 11:48:36.149148 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 20 11:48:36.149717 master-0 kubenswrapper[4180]: I0220 11:48:36.149625 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 20 11:48:36.150020 master-0 kubenswrapper[4180]: I0220 11:48:36.149976 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 20 11:48:36.150193 master-0 kubenswrapper[4180]: I0220 11:48:36.149978 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 20 11:48:36.270601 master-0 
kubenswrapper[4180]: I0220 11:48:36.270495 4180 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="3aec6ee8f7b5920e9465051d7cfad692f6df3984abc458694d67b2ca16e3fc95" exitCode=0 Feb 20 11:48:36.271034 master-0 kubenswrapper[4180]: I0220 11:48:36.270599 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2l64" event={"ID":"07281644-2789-424f-8429-aa4448dda01e","Type":"ContainerDied","Data":"3aec6ee8f7b5920e9465051d7cfad692f6df3984abc458694d67b2ca16e3fc95"} Feb 20 11:48:36.308014 master-0 kubenswrapper[4180]: I0220 11:48:36.307714 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms8wk\" (UniqueName: \"kubernetes.io/projected/836a6d7e-9b26-425f-ae21-00422515d7fe-kube-api-access-ms8wk\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.308014 master-0 kubenswrapper[4180]: I0220 11:48:36.307820 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/836a6d7e-9b26-425f-ae21-00422515d7fe-webhook-cert\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.308014 master-0 kubenswrapper[4180]: I0220 11:48:36.307896 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-env-overrides\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.308014 master-0 kubenswrapper[4180]: I0220 11:48:36.307946 4180 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-ovnkube-identity-cm\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.409160 master-0 kubenswrapper[4180]: I0220 11:48:36.408705 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/836a6d7e-9b26-425f-ae21-00422515d7fe-webhook-cert\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.409160 master-0 kubenswrapper[4180]: I0220 11:48:36.408829 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-env-overrides\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.409160 master-0 kubenswrapper[4180]: I0220 11:48:36.408909 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-ovnkube-identity-cm\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.410334 master-0 kubenswrapper[4180]: I0220 11:48:36.409960 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms8wk\" (UniqueName: \"kubernetes.io/projected/836a6d7e-9b26-425f-ae21-00422515d7fe-kube-api-access-ms8wk\") pod \"network-node-identity-psm4s\" (UID: 
\"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.411052 master-0 kubenswrapper[4180]: I0220 11:48:36.410969 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-ovnkube-identity-cm\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.416098 master-0 kubenswrapper[4180]: I0220 11:48:36.416054 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/836a6d7e-9b26-425f-ae21-00422515d7fe-webhook-cert\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.416506 master-0 kubenswrapper[4180]: I0220 11:48:36.416240 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-env-overrides\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:48:36.511160 master-0 kubenswrapper[4180]: I0220 11:48:36.511030 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:48:36.511417 master-0 kubenswrapper[4180]: E0220 11:48:36.511237 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 11:48:36.511417 master-0 kubenswrapper[4180]: E0220 11:48:36.511264 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 11:48:36.511417 master-0 kubenswrapper[4180]: E0220 11:48:36.511283 4180 projected.go:194] Error preparing data for projected volume kube-api-access-4zmwm for pod openshift-network-diagnostics/network-check-target-h5w2t: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 11:48:36.511417 master-0 kubenswrapper[4180]: E0220 11:48:36.511353 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm podName:39ccf158-b40f-4dba-90e2-27b1409487b7 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:40.511332871 +0000 UTC m=+81.686384701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4zmwm" (UniqueName: "kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm") pod "network-check-target-h5w2t" (UID: "39ccf158-b40f-4dba-90e2-27b1409487b7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 11:48:36.664233 master-0 kubenswrapper[4180]: I0220 11:48:36.664122 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:36.664407 master-0 kubenswrapper[4180]: E0220 11:48:36.664273 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:36.842780 master-0 kubenswrapper[4180]: I0220 11:48:36.842695 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms8wk\" (UniqueName: \"kubernetes.io/projected/836a6d7e-9b26-425f-ae21-00422515d7fe-kube-api-access-ms8wk\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s"
Feb 20 11:48:37.063003 master-0 kubenswrapper[4180]: I0220 11:48:37.062929 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-psm4s"
Feb 20 11:48:37.077462 master-0 kubenswrapper[4180]: W0220 11:48:37.077401 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod836a6d7e_9b26_425f_ae21_00422515d7fe.slice/crio-1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85 WatchSource:0}: Error finding container 1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85: Status 404 returned error can't find the container with id 1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85
Feb 20 11:48:37.273940 master-0 kubenswrapper[4180]: I0220 11:48:37.273877 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-psm4s" event={"ID":"836a6d7e-9b26-425f-ae21-00422515d7fe","Type":"ContainerStarted","Data":"1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85"}
Feb 20 11:48:37.664270 master-0 kubenswrapper[4180]: I0220 11:48:37.664220 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:37.664490 master-0 kubenswrapper[4180]: E0220 11:48:37.664447 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:38.663923 master-0 kubenswrapper[4180]: I0220 11:48:38.663856 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:38.664456 master-0 kubenswrapper[4180]: E0220 11:48:38.664216 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:39.155899 master-0 kubenswrapper[4180]: I0220 11:48:39.155853 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 20 11:48:39.663700 master-0 kubenswrapper[4180]: I0220 11:48:39.663655 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:39.664776 master-0 kubenswrapper[4180]: E0220 11:48:39.664734 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:39.676666 master-0 kubenswrapper[4180]: I0220 11:48:39.676602 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=1.676586199 podStartE2EDuration="1.676586199s" podCreationTimestamp="2026-02-20 11:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:48:39.676093204 +0000 UTC m=+80.851145044" watchObservedRunningTime="2026-02-20 11:48:39.676586199 +0000 UTC m=+80.851638019"
Feb 20 11:48:40.552957 master-0 kubenswrapper[4180]: I0220 11:48:40.552898 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:40.553194 master-0 kubenswrapper[4180]: E0220 11:48:40.553091 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 20 11:48:40.553194 master-0 kubenswrapper[4180]: E0220 11:48:40.553118 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 20 11:48:40.553194 master-0 kubenswrapper[4180]: E0220 11:48:40.553136 4180 projected.go:194] Error preparing data for projected volume kube-api-access-4zmwm for pod openshift-network-diagnostics/network-check-target-h5w2t: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 11:48:40.553276 master-0 kubenswrapper[4180]: E0220 11:48:40.553194 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm podName:39ccf158-b40f-4dba-90e2-27b1409487b7 nodeName:}" failed. No retries permitted until 2026-02-20 11:48:48.553171963 +0000 UTC m=+89.728223813 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4zmwm" (UniqueName: "kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm") pod "network-check-target-h5w2t" (UID: "39ccf158-b40f-4dba-90e2-27b1409487b7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 11:48:40.663854 master-0 kubenswrapper[4180]: I0220 11:48:40.663808 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:40.664042 master-0 kubenswrapper[4180]: E0220 11:48:40.663953 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:41.664464 master-0 kubenswrapper[4180]: I0220 11:48:41.664364 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:41.665645 master-0 kubenswrapper[4180]: E0220 11:48:41.664579 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:42.664546 master-0 kubenswrapper[4180]: I0220 11:48:42.664481 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:42.665192 master-0 kubenswrapper[4180]: E0220 11:48:42.664629 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:43.664274 master-0 kubenswrapper[4180]: I0220 11:48:43.664219 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:43.664443 master-0 kubenswrapper[4180]: E0220 11:48:43.664396 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:44.699339 master-0 kubenswrapper[4180]: I0220 11:48:44.698758 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:44.699339 master-0 kubenswrapper[4180]: E0220 11:48:44.698891 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:45.664546 master-0 kubenswrapper[4180]: I0220 11:48:45.664500 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:45.664739 master-0 kubenswrapper[4180]: E0220 11:48:45.664655 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:46.663828 master-0 kubenswrapper[4180]: I0220 11:48:46.663747 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:46.664493 master-0 kubenswrapper[4180]: E0220 11:48:46.663922 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:47.665015 master-0 kubenswrapper[4180]: I0220 11:48:47.664667 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:47.665701 master-0 kubenswrapper[4180]: E0220 11:48:47.665156 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:48.303707 master-0 kubenswrapper[4180]: I0220 11:48:48.303602 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-psm4s" event={"ID":"836a6d7e-9b26-425f-ae21-00422515d7fe","Type":"ContainerStarted","Data":"ace904c5f4a3faa1035b1dcf89c693ce9b93dceae341e4edfb98ee1576eea9b6"}
Feb 20 11:48:48.303707 master-0 kubenswrapper[4180]: I0220 11:48:48.303706 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-psm4s" event={"ID":"836a6d7e-9b26-425f-ae21-00422515d7fe","Type":"ContainerStarted","Data":"aaa3af9a84cfc7c141c74309c63bc10c16f762c6dcb4569b57086875bce4bcd4"}
Feb 20 11:48:48.306326 master-0 kubenswrapper[4180]: I0220 11:48:48.306238 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" event={"ID":"31969539-bfd1-466f-8697-f13cbbd957df","Type":"ContainerStarted","Data":"61a6b1802bd2528d8da1d6327d61e384e195f07e99b735a85a4645765053313c"}
Feb 20 11:48:48.310720 master-0 kubenswrapper[4180]: I0220 11:48:48.310656 4180 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="f69d0d27fc97dfc5ca9cd544f311dfc218b6f712d28eef596d03ab2168409d7f" exitCode=0
Feb 20 11:48:48.310844 master-0 kubenswrapper[4180]: I0220 11:48:48.310766 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2l64" event={"ID":"07281644-2789-424f-8429-aa4448dda01e","Type":"ContainerDied","Data":"f69d0d27fc97dfc5ca9cd544f311dfc218b6f712d28eef596d03ab2168409d7f"}
Feb 20 11:48:48.313738 master-0 kubenswrapper[4180]: I0220 11:48:48.313677 4180 generic.go:334] "Generic (PLEG): container finished" podID="e979c659-8581-466f-8528-01b6b4f51499" containerID="5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1" exitCode=0
Feb 20 11:48:48.313842 master-0 kubenswrapper[4180]: I0220 11:48:48.313740 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"}
Feb 20 11:48:48.330859 master-0 kubenswrapper[4180]: I0220 11:48:48.330660 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-psm4s" podStartSLOduration=2.9052602260000002 podStartE2EDuration="13.330637097s" podCreationTimestamp="2026-02-20 11:48:35 +0000 UTC" firstStartedPulling="2026-02-20 11:48:37.080045012 +0000 UTC m=+78.255096842" lastFinishedPulling="2026-02-20 11:48:47.505421853 +0000 UTC m=+88.680473713" observedRunningTime="2026-02-20 11:48:48.329966567 +0000 UTC m=+89.505018427" watchObservedRunningTime="2026-02-20 11:48:48.330637097 +0000 UTC m=+89.505688947"
Feb 20 11:48:48.398695 master-0 kubenswrapper[4180]: I0220 11:48:48.398499 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" podStartSLOduration=2.484415375 podStartE2EDuration="19.398472732s" podCreationTimestamp="2026-02-20 11:48:29 +0000 UTC" firstStartedPulling="2026-02-20 11:48:30.595693583 +0000 UTC m=+71.770745413" lastFinishedPulling="2026-02-20 11:48:47.5097509 +0000 UTC m=+88.684802770" observedRunningTime="2026-02-20 11:48:48.367360987 +0000 UTC m=+89.542412827" watchObservedRunningTime="2026-02-20 11:48:48.398472732 +0000 UTC m=+89.573524592"
Feb 20 11:48:48.634602 master-0 kubenswrapper[4180]: I0220 11:48:48.634164 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:48.634757 master-0 kubenswrapper[4180]: E0220 11:48:48.634352 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 20 11:48:48.634757 master-0 kubenswrapper[4180]: E0220 11:48:48.634661 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 20 11:48:48.634757 master-0 kubenswrapper[4180]: E0220 11:48:48.634687 4180 projected.go:194] Error preparing data for projected volume kube-api-access-4zmwm for pod openshift-network-diagnostics/network-check-target-h5w2t: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 11:48:48.634937 master-0 kubenswrapper[4180]: E0220 11:48:48.634766 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm podName:39ccf158-b40f-4dba-90e2-27b1409487b7 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:04.634740249 +0000 UTC m=+105.809792109 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4zmwm" (UniqueName: "kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm") pod "network-check-target-h5w2t" (UID: "39ccf158-b40f-4dba-90e2-27b1409487b7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 11:48:48.664297 master-0 kubenswrapper[4180]: I0220 11:48:48.663639 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:48.664297 master-0 kubenswrapper[4180]: E0220 11:48:48.663810 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:49.323421 master-0 kubenswrapper[4180]: I0220 11:48:49.323277 4180 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="9c73ec43a36008a1472e95cc448d96cb453a34c7d0f5983c1a526f4f124df839" exitCode=0
Feb 20 11:48:49.324935 master-0 kubenswrapper[4180]: I0220 11:48:49.323452 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2l64" event={"ID":"07281644-2789-424f-8429-aa4448dda01e","Type":"ContainerDied","Data":"9c73ec43a36008a1472e95cc448d96cb453a34c7d0f5983c1a526f4f124df839"}
Feb 20 11:48:49.328702 master-0 kubenswrapper[4180]: I0220 11:48:49.328665 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerStarted","Data":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"}
Feb 20 11:48:49.328822 master-0 kubenswrapper[4180]: I0220 11:48:49.328720 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerStarted","Data":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"}
Feb 20 11:48:49.328822 master-0 kubenswrapper[4180]: I0220 11:48:49.328741 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerStarted","Data":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"}
Feb 20 11:48:49.328822 master-0 kubenswrapper[4180]: I0220 11:48:49.328761 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerStarted","Data":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"}
Feb 20 11:48:49.328822 master-0 kubenswrapper[4180]: I0220 11:48:49.328778 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerStarted","Data":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"}
Feb 20 11:48:49.328822 master-0 kubenswrapper[4180]: I0220 11:48:49.328795 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerStarted","Data":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"}
Feb 20 11:48:49.664731 master-0 kubenswrapper[4180]: I0220 11:48:49.664655 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:49.665873 master-0 kubenswrapper[4180]: E0220 11:48:49.665805 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:49.946952 master-0 kubenswrapper[4180]: I0220 11:48:49.946708 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:49.947188 master-0 kubenswrapper[4180]: E0220 11:48:49.946958 4180 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 11:48:49.947188 master-0 kubenswrapper[4180]: E0220 11:48:49.947063 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:21.947034614 +0000 UTC m=+123.122086474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 11:48:50.338554 master-0 kubenswrapper[4180]: I0220 11:48:50.338435 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2l64" event={"ID":"07281644-2789-424f-8429-aa4448dda01e","Type":"ContainerStarted","Data":"8172b91a1888c252a9a415c4888b89fdcd3636c4761b35acdbf9fa3e57730aa7"}
Feb 20 11:48:50.367403 master-0 kubenswrapper[4180]: I0220 11:48:50.364151 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-f2l64" podStartSLOduration=3.553097995 podStartE2EDuration="33.364125718s" podCreationTimestamp="2026-02-20 11:48:17 +0000 UTC" firstStartedPulling="2026-02-20 11:48:17.61683502 +0000 UTC m=+58.791886880" lastFinishedPulling="2026-02-20 11:48:47.427862783 +0000 UTC m=+88.602914603" observedRunningTime="2026-02-20 11:48:50.363628213 +0000 UTC m=+91.538680093" watchObservedRunningTime="2026-02-20 11:48:50.364125718 +0000 UTC m=+91.539177578"
Feb 20 11:48:50.663849 master-0 kubenswrapper[4180]: I0220 11:48:50.663647 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:50.663849 master-0 kubenswrapper[4180]: E0220 11:48:50.663817 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:51.355194 master-0 kubenswrapper[4180]: I0220 11:48:51.355106 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerStarted","Data":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"}
Feb 20 11:48:51.664507 master-0 kubenswrapper[4180]: I0220 11:48:51.664075 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:51.664779 master-0 kubenswrapper[4180]: E0220 11:48:51.664670 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:51.677156 master-0 kubenswrapper[4180]: I0220 11:48:51.677058 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 20 11:48:52.663797 master-0 kubenswrapper[4180]: I0220 11:48:52.663719 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:52.664653 master-0 kubenswrapper[4180]: E0220 11:48:52.663936 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:53.664218 master-0 kubenswrapper[4180]: I0220 11:48:53.664144 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:53.665155 master-0 kubenswrapper[4180]: E0220 11:48:53.664380 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:54.374979 master-0 kubenswrapper[4180]: I0220 11:48:54.374903 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerStarted","Data":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"}
Feb 20 11:48:54.376694 master-0 kubenswrapper[4180]: I0220 11:48:54.375458 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:54.376694 master-0 kubenswrapper[4180]: I0220 11:48:54.375721 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:54.376694 master-0 kubenswrapper[4180]: I0220 11:48:54.375809 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:54.422506 master-0 kubenswrapper[4180]: I0220 11:48:54.422177 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" podStartSLOduration=8.291969644 podStartE2EDuration="25.422144347s" podCreationTimestamp="2026-02-20 11:48:29 +0000 UTC" firstStartedPulling="2026-02-20 11:48:30.349153633 +0000 UTC m=+71.524205463" lastFinishedPulling="2026-02-20 11:48:47.479328296 +0000 UTC m=+88.654380166" observedRunningTime="2026-02-20 11:48:54.408244338 +0000 UTC m=+95.583296238" watchObservedRunningTime="2026-02-20 11:48:54.422144347 +0000 UTC m=+95.597196197"
Feb 20 11:48:54.422808 master-0 kubenswrapper[4180]: I0220 11:48:54.422567 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=3.4225221279999998 podStartE2EDuration="3.422522128s" podCreationTimestamp="2026-02-20 11:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:48:54.421657152 +0000 UTC m=+95.596709012" watchObservedRunningTime="2026-02-20 11:48:54.422522128 +0000 UTC m=+95.597573988"
Feb 20 11:48:54.433367 master-0 kubenswrapper[4180]: I0220 11:48:54.433298 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:54.434483 master-0 kubenswrapper[4180]: I0220 11:48:54.434434 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp"
Feb 20 11:48:54.664499 master-0 kubenswrapper[4180]: I0220 11:48:54.664336 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:54.664499 master-0 kubenswrapper[4180]: E0220 11:48:54.664484 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:54.761134 master-0 kubenswrapper[4180]: I0220 11:48:54.761057 4180 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bj4fp"]
Feb 20 11:48:55.664220 master-0 kubenswrapper[4180]: I0220 11:48:55.664142 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:55.664518 master-0 kubenswrapper[4180]: E0220 11:48:55.664335 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:55.684217 master-0 kubenswrapper[4180]: I0220 11:48:55.684145 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Feb 20 11:48:55.684895 master-0 kubenswrapper[4180]: W0220 11:48:55.684443 4180 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 20 11:48:56.026180 master-0 kubenswrapper[4180]: I0220 11:48:56.025843 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-h5w2t"]
Feb 20 11:48:56.026437 master-0 kubenswrapper[4180]: I0220 11:48:56.026289 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:56.026437 master-0 kubenswrapper[4180]: E0220 11:48:56.026405 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:56.028496 master-0 kubenswrapper[4180]: I0220 11:48:56.028415 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-29622"]
Feb 20 11:48:56.381380 master-0 kubenswrapper[4180]: I0220 11:48:56.381183 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:56.381641 master-0 kubenswrapper[4180]: E0220 11:48:56.381391 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:56.382389 master-0 kubenswrapper[4180]: I0220 11:48:56.382319 4180 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovn-controller" containerID="cri-o://6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43" gracePeriod=30
Feb 20 11:48:56.382486 master-0 kubenswrapper[4180]: I0220 11:48:56.382358 4180 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="nbdb" containerID="cri-o://7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6" gracePeriod=30
Feb 20 11:48:56.382666 master-0 kubenswrapper[4180]: I0220 11:48:56.382605 4180 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="northd" containerID="cri-o://f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94" gracePeriod=30
Feb 20 11:48:56.382812 master-0 kubenswrapper[4180]: I0220 11:48:56.382766 4180 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2" gracePeriod=30
Feb 20 11:48:56.382904 master-0 kubenswrapper[4180]: I0220 11:48:56.382866 4180 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="sbdb" containerID="cri-o://168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d" gracePeriod=30
Feb 20 11:48:56.382965 master-0 kubenswrapper[4180]: I0220 11:48:56.382883 4180 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kube-rbac-proxy-node" containerID="cri-o://c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0" gracePeriod=30
Feb 20 11:48:56.383032 master-0 kubenswrapper[4180]: I0220 11:48:56.382908 4180 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovn-acl-logging" containerID="cri-o://9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8" gracePeriod=30
Feb 20 11:48:56.404578 master-0 kubenswrapper[4180]: I0220 11:48:56.402569 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=1.402514977 podStartE2EDuration="1.402514977s" podCreationTimestamp="2026-02-20 11:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:48:56.401998552 +0000 UTC m=+97.577050442" watchObservedRunningTime="2026-02-20 11:48:56.402514977 +0000 UTC m=+97.577566837"
Feb 20 11:48:56.425569 master-0 kubenswrapper[4180]: I0220 11:48:56.423947 4180 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovnkube-controller" containerID="cri-o://781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b" gracePeriod=30
Feb 20 11:48:56.901713 master-0 kubenswrapper[4180]: I0220 11:48:56.901638 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/ovnkube-controller/0.log"
Feb 20 11:48:56.904438 master-0 kubenswrapper[4180]: I0220 11:48:56.904385 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/kube-rbac-proxy-ovn-metrics/0.log"
Feb 20 11:48:56.905260 master-0 kubenswrapper[4180]: I0220 11:48:56.905186 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/kube-rbac-proxy-node/0.log"
Feb 20 11:48:56.905936 master-0 kubenswrapper[4180]: I0220 11:48:56.905894 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/ovn-acl-logging/0.log"
Feb 20 11:48:56.906848 master-0 kubenswrapper[4180]: I0220 11:48:56.906794 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/ovn-controller/0.log"
Feb 20 11:48:56.907570 master-0 kubenswrapper[4180]: I0220 11:48:56.907510 4180 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" Feb 20 11:48:56.972723 master-0 kubenswrapper[4180]: I0220 11:48:56.972652 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7l848"] Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: E0220 11:48:56.972837 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kubecfg-setup" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: I0220 11:48:56.972857 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kubecfg-setup" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: E0220 11:48:56.972873 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovn-acl-logging" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: I0220 11:48:56.972885 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovn-acl-logging" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: E0220 11:48:56.972903 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovn-controller" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: I0220 11:48:56.972916 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovn-controller" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: E0220 11:48:56.972929 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kube-rbac-proxy-node" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: I0220 11:48:56.972941 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kube-rbac-proxy-node" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: E0220 
11:48:56.972956 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: I0220 11:48:56.972968 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: E0220 11:48:56.972981 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="nbdb" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: I0220 11:48:56.972993 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="nbdb" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: E0220 11:48:56.973006 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="northd" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: I0220 11:48:56.973018 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="northd" Feb 20 11:48:56.973016 master-0 kubenswrapper[4180]: E0220 11:48:56.973032 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="sbdb" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973044 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="sbdb" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: E0220 11:48:56.973056 4180 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovnkube-controller" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973068 4180 state_mem.go:107] "Deleted CPUSet assignment" podUID="e979c659-8581-466f-8528-01b6b4f51499" 
containerName="ovnkube-controller" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973128 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="nbdb" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973146 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973163 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovn-controller" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973176 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="sbdb" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973188 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="northd" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973199 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="kube-rbac-proxy-node" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973212 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovnkube-controller" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.973224 4180 memory_manager.go:354] "RemoveStaleState removing state" podUID="e979c659-8581-466f-8528-01b6b4f51499" containerName="ovn-acl-logging" Feb 20 11:48:56.975042 master-0 kubenswrapper[4180]: I0220 11:48:56.974417 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.012443 master-0 kubenswrapper[4180]: I0220 11:48:57.012349 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-ovn\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012443 master-0 kubenswrapper[4180]: I0220 11:48:57.012398 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-systemd-units\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012443 master-0 kubenswrapper[4180]: I0220 11:48:57.012427 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-config\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012443 master-0 kubenswrapper[4180]: I0220 11:48:57.012450 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-openvswitch\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012480 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-kubelet\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012506 4180 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-slash\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012552 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-var-lib-openvswitch\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012587 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbb4\" (UniqueName: \"kubernetes.io/projected/e979c659-8581-466f-8528-01b6b4f51499-kube-api-access-jbbb4\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012610 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e979c659-8581-466f-8528-01b6b4f51499-ovn-node-metrics-cert\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012631 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-bin\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012655 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-log-socket\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012685 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-node-log\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012714 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-script-lib\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012741 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012768 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-etc-openvswitch\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012798 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-ovn-kubernetes\") pod 
\"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012828 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-env-overrides\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012855 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-netns\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012882 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-systemd\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012908 4180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-netd\") pod \"e979c659-8581-466f-8528-01b6b4f51499\" (UID: \"e979c659-8581-466f-8528-01b6b4f51499\") " Feb 20 11:48:57.012949 master-0 kubenswrapper[4180]: I0220 11:48:57.012970 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-ovn\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: 
I0220 11:48:57.013003 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-log-socket\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013036 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-node-log\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013065 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-slash\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013091 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-bin\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013119 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-config\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 
11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013147 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-script-lib\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013180 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-netd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013229 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-kubelet\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013259 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovn-node-metrics-cert\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013291 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j4cs\" (UniqueName: \"kubernetes.io/projected/478be5e4-cf17-4ebf-a45a-c18cd2b69929-kube-api-access-5j4cs\") pod \"ovnkube-node-7l848\" 
(UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013325 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-var-lib-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013357 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-systemd-units\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013390 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013420 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013468 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-netns\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.013980 master-0 kubenswrapper[4180]: I0220 11:48:57.013497 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.013565 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-etc-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.013603 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-systemd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.013633 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-env-overrides\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.013737 4180 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.013779 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.014384 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.014473 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.014576 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-slash" (OuterVolumeSpecName: "host-slash") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.014628 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.014926 master-0 kubenswrapper[4180]: I0220 11:48:57.014771 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.015483 master-0 kubenswrapper[4180]: I0220 11:48:57.015053 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:48:57.015483 master-0 kubenswrapper[4180]: I0220 11:48:57.015188 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-node-log" (OuterVolumeSpecName: "node-log") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.015483 master-0 kubenswrapper[4180]: I0220 11:48:57.015231 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.015483 master-0 kubenswrapper[4180]: I0220 11:48:57.015287 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-log-socket" (OuterVolumeSpecName: "log-socket") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.015483 master-0 kubenswrapper[4180]: I0220 11:48:57.015346 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.015483 master-0 kubenswrapper[4180]: I0220 11:48:57.015397 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.015483 master-0 kubenswrapper[4180]: I0220 11:48:57.015437 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.016013 master-0 kubenswrapper[4180]: I0220 11:48:57.015597 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.016013 master-0 kubenswrapper[4180]: I0220 11:48:57.015962 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:48:57.016147 master-0 kubenswrapper[4180]: I0220 11:48:57.016024 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:48:57.022561 master-0 kubenswrapper[4180]: I0220 11:48:57.021822 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e979c659-8581-466f-8528-01b6b4f51499-kube-api-access-jbbb4" (OuterVolumeSpecName: "kube-api-access-jbbb4") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "kube-api-access-jbbb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 11:48:57.022561 master-0 kubenswrapper[4180]: I0220 11:48:57.022239 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e979c659-8581-466f-8528-01b6b4f51499-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 11:48:57.025429 master-0 kubenswrapper[4180]: I0220 11:48:57.025363 4180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e979c659-8581-466f-8528-01b6b4f51499" (UID: "e979c659-8581-466f-8528-01b6b4f51499"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:48:57.114990 master-0 kubenswrapper[4180]: I0220 11:48:57.114741 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-config\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.114990 master-0 kubenswrapper[4180]: I0220 11:48:57.114888 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-script-lib\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.114990 master-0 kubenswrapper[4180]: I0220 11:48:57.114945 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-slash\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.114990 master-0 kubenswrapper[4180]: I0220 11:48:57.114987 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-bin\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115415 master-0 kubenswrapper[4180]: I0220 11:48:57.115034 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-netd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115415 master-0 kubenswrapper[4180]: I0220 11:48:57.115102 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-kubelet\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115415 master-0 kubenswrapper[4180]: I0220 11:48:57.115148 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovn-node-metrics-cert\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115415 master-0 kubenswrapper[4180]: I0220 11:48:57.115190 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j4cs\" (UniqueName: \"kubernetes.io/projected/478be5e4-cf17-4ebf-a45a-c18cd2b69929-kube-api-access-5j4cs\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115415 master-0 kubenswrapper[4180]: I0220 11:48:57.115241 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-var-lib-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115415 master-0 kubenswrapper[4180]: I0220 11:48:57.115282 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-systemd-units\") pod \"ovnkube-node-7l848\" (UID: 
\"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115415 master-0 kubenswrapper[4180]: I0220 11:48:57.115329 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115415 master-0 kubenswrapper[4180]: I0220 11:48:57.115372 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115436 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-netns\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115484 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115577 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-etc-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115628 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-systemd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115676 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-env-overrides\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115718 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-ovn\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115760 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-log-socket\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115801 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-node-log\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115864 4180 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-node-log\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115893 4180 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115921 4180 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115929 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-config\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115948 4180 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.115975 4180 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.115984 master-0 kubenswrapper[4180]: I0220 11:48:57.116004 4180 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-env-overrides\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116014 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-systemd-units\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116029 4180 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-run-netns\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116053 4180 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-systemd\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116077 4180 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116102 4180 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-systemd-units\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 
11:48:57.116127 4180 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e979c659-8581-466f-8528-01b6b4f51499-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116151 4180 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116174 4180 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116200 4180 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-kubelet\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116225 4180 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-slash\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116251 4180 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116276 4180 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbb4\" (UniqueName: \"kubernetes.io/projected/e979c659-8581-466f-8528-01b6b4f51499-kube-api-access-jbbb4\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 
11:48:57.116301 4180 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e979c659-8581-466f-8528-01b6b4f51499-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116324 4180 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116347 4180 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e979c659-8581-466f-8528-01b6b4f51499-log-socket\") on node \"master-0\" DevicePath \"\"" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116413 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-node-log\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116483 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116578 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116643 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-netns\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116699 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.116745 master-0 kubenswrapper[4180]: I0220 11:48:57.116757 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-etc-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.116816 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-systemd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.116849 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-script-lib\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 
20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.116908 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-slash\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.116957 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-bin\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.116999 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-netd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.117038 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-kubelet\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.117230 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-ovn\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.117607 4180 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-var-lib-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.117687 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-env-overrides\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.118594 master-0 kubenswrapper[4180]: I0220 11:48:57.117747 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-log-socket\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.122127 master-0 kubenswrapper[4180]: I0220 11:48:57.121774 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovn-node-metrics-cert\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.148451 master-0 kubenswrapper[4180]: I0220 11:48:57.148402 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j4cs\" (UniqueName: \"kubernetes.io/projected/478be5e4-cf17-4ebf-a45a-c18cd2b69929-kube-api-access-5j4cs\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.305606 master-0 kubenswrapper[4180]: I0220 11:48:57.305557 4180 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:48:57.323801 master-0 kubenswrapper[4180]: W0220 11:48:57.323741 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod478be5e4_cf17_4ebf_a45a_c18cd2b69929.slice/crio-23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c WatchSource:0}: Error finding container 23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c: Status 404 returned error can't find the container with id 23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c Feb 20 11:48:57.386682 master-0 kubenswrapper[4180]: I0220 11:48:57.386619 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerStarted","Data":"23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c"} Feb 20 11:48:57.389015 master-0 kubenswrapper[4180]: I0220 11:48:57.388982 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/ovnkube-controller/0.log" Feb 20 11:48:57.391141 master-0 kubenswrapper[4180]: I0220 11:48:57.391098 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/kube-rbac-proxy-ovn-metrics/0.log" Feb 20 11:48:57.391945 master-0 kubenswrapper[4180]: I0220 11:48:57.391888 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/kube-rbac-proxy-node/0.log" Feb 20 11:48:57.392649 master-0 kubenswrapper[4180]: I0220 11:48:57.392607 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/ovn-acl-logging/0.log" Feb 20 11:48:57.393168 master-0 
kubenswrapper[4180]: I0220 11:48:57.393126 4180 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bj4fp_e979c659-8581-466f-8528-01b6b4f51499/ovn-controller/0.log" Feb 20 11:48:57.393756 master-0 kubenswrapper[4180]: I0220 11:48:57.393697 4180 generic.go:334] "Generic (PLEG): container finished" podID="e979c659-8581-466f-8528-01b6b4f51499" containerID="781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b" exitCode=2 Feb 20 11:48:57.393756 master-0 kubenswrapper[4180]: I0220 11:48:57.393748 4180 generic.go:334] "Generic (PLEG): container finished" podID="e979c659-8581-466f-8528-01b6b4f51499" containerID="168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d" exitCode=0 Feb 20 11:48:57.393903 master-0 kubenswrapper[4180]: I0220 11:48:57.393772 4180 generic.go:334] "Generic (PLEG): container finished" podID="e979c659-8581-466f-8528-01b6b4f51499" containerID="7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6" exitCode=0 Feb 20 11:48:57.393903 master-0 kubenswrapper[4180]: I0220 11:48:57.393791 4180 generic.go:334] "Generic (PLEG): container finished" podID="e979c659-8581-466f-8528-01b6b4f51499" containerID="f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94" exitCode=0 Feb 20 11:48:57.393903 master-0 kubenswrapper[4180]: I0220 11:48:57.393809 4180 generic.go:334] "Generic (PLEG): container finished" podID="e979c659-8581-466f-8528-01b6b4f51499" containerID="5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2" exitCode=143 Feb 20 11:48:57.393903 master-0 kubenswrapper[4180]: I0220 11:48:57.393827 4180 generic.go:334] "Generic (PLEG): container finished" podID="e979c659-8581-466f-8528-01b6b4f51499" containerID="c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0" exitCode=143 Feb 20 11:48:57.393903 master-0 kubenswrapper[4180]: I0220 11:48:57.393844 4180 generic.go:334] "Generic (PLEG): container finished" podID="e979c659-8581-466f-8528-01b6b4f51499" 
containerID="9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8" exitCode=143 Feb 20 11:48:57.393903 master-0 kubenswrapper[4180]: I0220 11:48:57.393862 4180 generic.go:334] "Generic (PLEG): container finished" podID="e979c659-8581-466f-8528-01b6b4f51499" containerID="6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43" exitCode=143 Feb 20 11:48:57.393903 master-0 kubenswrapper[4180]: I0220 11:48:57.393747 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"} Feb 20 11:48:57.394294 master-0 kubenswrapper[4180]: I0220 11:48:57.393909 4180 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" Feb 20 11:48:57.394294 master-0 kubenswrapper[4180]: I0220 11:48:57.393943 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"} Feb 20 11:48:57.394294 master-0 kubenswrapper[4180]: I0220 11:48:57.393986 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"} Feb 20 11:48:57.394294 master-0 kubenswrapper[4180]: I0220 11:48:57.394016 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"} Feb 20 11:48:57.394294 master-0 kubenswrapper[4180]: I0220 11:48:57.394042 4180 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"} Feb 20 11:48:57.394294 master-0 kubenswrapper[4180]: I0220 11:48:57.394054 4180 scope.go:117] "RemoveContainer" containerID="781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b" Feb 20 11:48:57.394294 master-0 kubenswrapper[4180]: I0220 11:48:57.394067 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394216 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394467 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394481 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394498 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394518 4180 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394566 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394599 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394613 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394623 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394635 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394646 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394658 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 
11:48:57.394672 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394699 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394724 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394742 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394758 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394773 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394787 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394802 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394873 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394893 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"} Feb 20 11:48:57.394876 master-0 kubenswrapper[4180]: I0220 11:48:57.394909 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"} Feb 20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.394972 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bj4fp" event={"ID":"e979c659-8581-466f-8528-01b6b4f51499","Type":"ContainerDied","Data":"74210ef7f7477a5cc9d9d264c0abf5069d4cce4d0d3176995bf660061a1084b1"} Feb 20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.394997 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"} Feb 20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.395010 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"} Feb 20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.395066 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"} Feb 
20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.395085 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"} Feb 20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.395099 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"} Feb 20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.395114 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"} Feb 20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.395163 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"} Feb 20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.395176 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"} Feb 20 11:48:57.397174 master-0 kubenswrapper[4180]: I0220 11:48:57.395187 4180 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"} Feb 20 11:48:57.415727 master-0 kubenswrapper[4180]: I0220 11:48:57.415658 4180 scope.go:117] "RemoveContainer" containerID="168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d" Feb 20 11:48:57.435475 master-0 kubenswrapper[4180]: I0220 11:48:57.435418 4180 scope.go:117] "RemoveContainer" containerID="7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6" Feb 20 11:48:57.445307 master-0 kubenswrapper[4180]: 
I0220 11:48:57.443626 4180 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bj4fp"] Feb 20 11:48:57.451570 master-0 kubenswrapper[4180]: I0220 11:48:57.451479 4180 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bj4fp"] Feb 20 11:48:57.460893 master-0 kubenswrapper[4180]: I0220 11:48:57.460830 4180 scope.go:117] "RemoveContainer" containerID="f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94" Feb 20 11:48:57.557464 master-0 kubenswrapper[4180]: I0220 11:48:57.557401 4180 scope.go:117] "RemoveContainer" containerID="5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2" Feb 20 11:48:57.572057 master-0 kubenswrapper[4180]: I0220 11:48:57.572009 4180 scope.go:117] "RemoveContainer" containerID="c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0" Feb 20 11:48:57.586814 master-0 kubenswrapper[4180]: I0220 11:48:57.586722 4180 scope.go:117] "RemoveContainer" containerID="9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8" Feb 20 11:48:57.602924 master-0 kubenswrapper[4180]: I0220 11:48:57.602870 4180 scope.go:117] "RemoveContainer" containerID="6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43" Feb 20 11:48:57.620291 master-0 kubenswrapper[4180]: I0220 11:48:57.620226 4180 scope.go:117] "RemoveContainer" containerID="5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1" Feb 20 11:48:57.635834 master-0 kubenswrapper[4180]: I0220 11:48:57.635779 4180 scope.go:117] "RemoveContainer" containerID="781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b" Feb 20 11:48:57.636316 master-0 kubenswrapper[4180]: E0220 11:48:57.636260 4180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": container with ID starting with 
781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b not found: ID does not exist" containerID="781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b" Feb 20 11:48:57.636389 master-0 kubenswrapper[4180]: I0220 11:48:57.636318 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"} err="failed to get container status \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": rpc error: code = NotFound desc = could not find container \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": container with ID starting with 781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b not found: ID does not exist" Feb 20 11:48:57.636389 master-0 kubenswrapper[4180]: I0220 11:48:57.636357 4180 scope.go:117] "RemoveContainer" containerID="168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d" Feb 20 11:48:57.637054 master-0 kubenswrapper[4180]: E0220 11:48:57.636852 4180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": container with ID starting with 168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d not found: ID does not exist" containerID="168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d" Feb 20 11:48:57.637054 master-0 kubenswrapper[4180]: I0220 11:48:57.636909 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"} err="failed to get container status \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": rpc error: code = NotFound desc = could not find container \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": container with ID starting with 
168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d not found: ID does not exist" Feb 20 11:48:57.637054 master-0 kubenswrapper[4180]: I0220 11:48:57.636948 4180 scope.go:117] "RemoveContainer" containerID="7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6" Feb 20 11:48:57.637415 master-0 kubenswrapper[4180]: E0220 11:48:57.637356 4180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": container with ID starting with 7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6 not found: ID does not exist" containerID="7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6" Feb 20 11:48:57.637415 master-0 kubenswrapper[4180]: I0220 11:48:57.637402 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"} err="failed to get container status \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": rpc error: code = NotFound desc = could not find container \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": container with ID starting with 7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6 not found: ID does not exist" Feb 20 11:48:57.637598 master-0 kubenswrapper[4180]: I0220 11:48:57.637430 4180 scope.go:117] "RemoveContainer" containerID="f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94" Feb 20 11:48:57.637912 master-0 kubenswrapper[4180]: E0220 11:48:57.637859 4180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": container with ID starting with f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94 not found: ID does not exist" 
containerID="f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94" Feb 20 11:48:57.637980 master-0 kubenswrapper[4180]: I0220 11:48:57.637901 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"} err="failed to get container status \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": rpc error: code = NotFound desc = could not find container \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": container with ID starting with f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94 not found: ID does not exist" Feb 20 11:48:57.637980 master-0 kubenswrapper[4180]: I0220 11:48:57.637928 4180 scope.go:117] "RemoveContainer" containerID="5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2" Feb 20 11:48:57.638462 master-0 kubenswrapper[4180]: E0220 11:48:57.638307 4180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": container with ID starting with 5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2 not found: ID does not exist" containerID="5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2" Feb 20 11:48:57.638462 master-0 kubenswrapper[4180]: I0220 11:48:57.638379 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"} err="failed to get container status \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": rpc error: code = NotFound desc = could not find container \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": container with ID starting with 5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2 not found: ID does not exist" Feb 20 11:48:57.638462 master-0 
kubenswrapper[4180]: I0220 11:48:57.638407 4180 scope.go:117] "RemoveContainer" containerID="c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0" Feb 20 11:48:57.638893 master-0 kubenswrapper[4180]: E0220 11:48:57.638833 4180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": container with ID starting with c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0 not found: ID does not exist" containerID="c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0" Feb 20 11:48:57.638965 master-0 kubenswrapper[4180]: I0220 11:48:57.638888 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"} err="failed to get container status \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": rpc error: code = NotFound desc = could not find container \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": container with ID starting with c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0 not found: ID does not exist" Feb 20 11:48:57.638965 master-0 kubenswrapper[4180]: I0220 11:48:57.638924 4180 scope.go:117] "RemoveContainer" containerID="9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8" Feb 20 11:48:57.642432 master-0 kubenswrapper[4180]: E0220 11:48:57.642373 4180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8\": container with ID starting with 9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8 not found: ID does not exist" containerID="9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8" Feb 20 11:48:57.642580 master-0 kubenswrapper[4180]: I0220 11:48:57.642425 4180 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"} err="failed to get container status \"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8\": rpc error: code = NotFound desc = could not find container \"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8\": container with ID starting with 9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8 not found: ID does not exist" Feb 20 11:48:57.642580 master-0 kubenswrapper[4180]: I0220 11:48:57.642457 4180 scope.go:117] "RemoveContainer" containerID="6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43" Feb 20 11:48:57.643483 master-0 kubenswrapper[4180]: E0220 11:48:57.643429 4180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43\": container with ID starting with 6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43 not found: ID does not exist" containerID="6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43" Feb 20 11:48:57.643654 master-0 kubenswrapper[4180]: I0220 11:48:57.643490 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"} err="failed to get container status \"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43\": rpc error: code = NotFound desc = could not find container \"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43\": container with ID starting with 6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43 not found: ID does not exist" Feb 20 11:48:57.643654 master-0 kubenswrapper[4180]: I0220 11:48:57.643567 4180 scope.go:117] "RemoveContainer" containerID="5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1" Feb 20 
11:48:57.644450 master-0 kubenswrapper[4180]: E0220 11:48:57.644395 4180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1\": container with ID starting with 5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1 not found: ID does not exist" containerID="5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1" Feb 20 11:48:57.644574 master-0 kubenswrapper[4180]: I0220 11:48:57.644447 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"} err="failed to get container status \"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1\": rpc error: code = NotFound desc = could not find container \"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1\": container with ID starting with 5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1 not found: ID does not exist" Feb 20 11:48:57.644574 master-0 kubenswrapper[4180]: I0220 11:48:57.644482 4180 scope.go:117] "RemoveContainer" containerID="781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b" Feb 20 11:48:57.645265 master-0 kubenswrapper[4180]: I0220 11:48:57.645198 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"} err="failed to get container status \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": rpc error: code = NotFound desc = could not find container \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": container with ID starting with 781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b not found: ID does not exist" Feb 20 11:48:57.645265 master-0 kubenswrapper[4180]: I0220 11:48:57.645257 4180 scope.go:117] "RemoveContainer" 
containerID="168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d" Feb 20 11:48:57.645976 master-0 kubenswrapper[4180]: I0220 11:48:57.645922 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"} err="failed to get container status \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": rpc error: code = NotFound desc = could not find container \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": container with ID starting with 168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d not found: ID does not exist" Feb 20 11:48:57.645976 master-0 kubenswrapper[4180]: I0220 11:48:57.645957 4180 scope.go:117] "RemoveContainer" containerID="7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6" Feb 20 11:48:57.646507 master-0 kubenswrapper[4180]: I0220 11:48:57.646451 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"} err="failed to get container status \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": rpc error: code = NotFound desc = could not find container \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": container with ID starting with 7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6 not found: ID does not exist" Feb 20 11:48:57.646507 master-0 kubenswrapper[4180]: I0220 11:48:57.646492 4180 scope.go:117] "RemoveContainer" containerID="f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94" Feb 20 11:48:57.647260 master-0 kubenswrapper[4180]: I0220 11:48:57.647218 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"} err="failed to get container status 
\"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": rpc error: code = NotFound desc = could not find container \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": container with ID starting with f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94 not found: ID does not exist" Feb 20 11:48:57.647260 master-0 kubenswrapper[4180]: I0220 11:48:57.647254 4180 scope.go:117] "RemoveContainer" containerID="5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2" Feb 20 11:48:57.647876 master-0 kubenswrapper[4180]: I0220 11:48:57.647832 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"} err="failed to get container status \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": rpc error: code = NotFound desc = could not find container \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": container with ID starting with 5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2 not found: ID does not exist" Feb 20 11:48:57.647967 master-0 kubenswrapper[4180]: I0220 11:48:57.647875 4180 scope.go:117] "RemoveContainer" containerID="c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0" Feb 20 11:48:57.648782 master-0 kubenswrapper[4180]: I0220 11:48:57.648557 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"} err="failed to get container status \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": rpc error: code = NotFound desc = could not find container \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": container with ID starting with c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0 not found: ID does not exist" Feb 20 11:48:57.648782 master-0 kubenswrapper[4180]: I0220 11:48:57.648616 4180 
scope.go:117] "RemoveContainer" containerID="9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8" Feb 20 11:48:57.649137 master-0 kubenswrapper[4180]: I0220 11:48:57.649084 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"} err="failed to get container status \"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8\": rpc error: code = NotFound desc = could not find container \"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8\": container with ID starting with 9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8 not found: ID does not exist" Feb 20 11:48:57.649137 master-0 kubenswrapper[4180]: I0220 11:48:57.649123 4180 scope.go:117] "RemoveContainer" containerID="6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43" Feb 20 11:48:57.649660 master-0 kubenswrapper[4180]: I0220 11:48:57.649621 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"} err="failed to get container status \"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43\": rpc error: code = NotFound desc = could not find container \"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43\": container with ID starting with 6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43 not found: ID does not exist" Feb 20 11:48:57.649660 master-0 kubenswrapper[4180]: I0220 11:48:57.649655 4180 scope.go:117] "RemoveContainer" containerID="5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1" Feb 20 11:48:57.650222 master-0 kubenswrapper[4180]: I0220 11:48:57.650170 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"} err="failed to get container status 
\"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1\": rpc error: code = NotFound desc = could not find container \"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1\": container with ID starting with 5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1 not found: ID does not exist" Feb 20 11:48:57.650222 master-0 kubenswrapper[4180]: I0220 11:48:57.650222 4180 scope.go:117] "RemoveContainer" containerID="781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b" Feb 20 11:48:57.650822 master-0 kubenswrapper[4180]: I0220 11:48:57.650777 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"} err="failed to get container status \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": rpc error: code = NotFound desc = could not find container \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": container with ID starting with 781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b not found: ID does not exist" Feb 20 11:48:57.650822 master-0 kubenswrapper[4180]: I0220 11:48:57.650817 4180 scope.go:117] "RemoveContainer" containerID="168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d" Feb 20 11:48:57.651280 master-0 kubenswrapper[4180]: I0220 11:48:57.651241 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"} err="failed to get container status \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": rpc error: code = NotFound desc = could not find container \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": container with ID starting with 168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d not found: ID does not exist" Feb 20 11:48:57.651360 master-0 kubenswrapper[4180]: I0220 11:48:57.651278 4180 
scope.go:117] "RemoveContainer" containerID="7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6" Feb 20 11:48:57.651792 master-0 kubenswrapper[4180]: I0220 11:48:57.651756 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"} err="failed to get container status \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": rpc error: code = NotFound desc = could not find container \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": container with ID starting with 7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6 not found: ID does not exist" Feb 20 11:48:57.651925 master-0 kubenswrapper[4180]: I0220 11:48:57.651790 4180 scope.go:117] "RemoveContainer" containerID="f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94" Feb 20 11:48:57.652490 master-0 kubenswrapper[4180]: I0220 11:48:57.652437 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"} err="failed to get container status \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": rpc error: code = NotFound desc = could not find container \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": container with ID starting with f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94 not found: ID does not exist" Feb 20 11:48:57.652638 master-0 kubenswrapper[4180]: I0220 11:48:57.652488 4180 scope.go:117] "RemoveContainer" containerID="5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2" Feb 20 11:48:57.653013 master-0 kubenswrapper[4180]: I0220 11:48:57.652974 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"} err="failed to get container status 
\"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": rpc error: code = NotFound desc = could not find container \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": container with ID starting with 5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2 not found: ID does not exist"
Feb 20 11:48:57.653097 master-0 kubenswrapper[4180]: I0220 11:48:57.653010 4180 scope.go:117] "RemoveContainer" containerID="c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"
Feb 20 11:48:57.653457 master-0 kubenswrapper[4180]: I0220 11:48:57.653410 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"} err="failed to get container status \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": rpc error: code = NotFound desc = could not find container \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": container with ID starting with c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0 not found: ID does not exist"
Feb 20 11:48:57.653613 master-0 kubenswrapper[4180]: I0220 11:48:57.653460 4180 scope.go:117] "RemoveContainer" containerID="9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"
Feb 20 11:48:57.654137 master-0 kubenswrapper[4180]: I0220 11:48:57.654088 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"} err="failed to get container status \"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8\": rpc error: code = NotFound desc = could not find container \"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8\": container with ID starting with 9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8 not found: ID does not exist"
Feb 20 11:48:57.654240 master-0 kubenswrapper[4180]: I0220 11:48:57.654140 4180 scope.go:117] "RemoveContainer" containerID="6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"
Feb 20 11:48:57.654753 master-0 kubenswrapper[4180]: I0220 11:48:57.654695 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"} err="failed to get container status \"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43\": rpc error: code = NotFound desc = could not find container \"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43\": container with ID starting with 6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43 not found: ID does not exist"
Feb 20 11:48:57.654753 master-0 kubenswrapper[4180]: I0220 11:48:57.654748 4180 scope.go:117] "RemoveContainer" containerID="5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"
Feb 20 11:48:57.655260 master-0 kubenswrapper[4180]: I0220 11:48:57.655219 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"} err="failed to get container status \"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1\": rpc error: code = NotFound desc = could not find container \"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1\": container with ID starting with 5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1 not found: ID does not exist"
Feb 20 11:48:57.655260 master-0 kubenswrapper[4180]: I0220 11:48:57.655258 4180 scope.go:117] "RemoveContainer" containerID="781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"
Feb 20 11:48:57.655872 master-0 kubenswrapper[4180]: I0220 11:48:57.655816 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"} err="failed to get container status \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": rpc error: code = NotFound desc = could not find container \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": container with ID starting with 781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b not found: ID does not exist"
Feb 20 11:48:57.655872 master-0 kubenswrapper[4180]: I0220 11:48:57.655870 4180 scope.go:117] "RemoveContainer" containerID="168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"
Feb 20 11:48:57.656334 master-0 kubenswrapper[4180]: I0220 11:48:57.656284 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"} err="failed to get container status \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": rpc error: code = NotFound desc = could not find container \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": container with ID starting with 168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d not found: ID does not exist"
Feb 20 11:48:57.656433 master-0 kubenswrapper[4180]: I0220 11:48:57.656332 4180 scope.go:117] "RemoveContainer" containerID="7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"
Feb 20 11:48:57.656909 master-0 kubenswrapper[4180]: I0220 11:48:57.656870 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"} err="failed to get container status \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": rpc error: code = NotFound desc = could not find container \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": container with ID starting with 7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6 not found: ID does not exist"
Feb 20 11:48:57.656909 master-0 kubenswrapper[4180]: I0220 11:48:57.656903 4180 scope.go:117] "RemoveContainer" containerID="f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"
Feb 20 11:48:57.657339 master-0 kubenswrapper[4180]: I0220 11:48:57.657298 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"} err="failed to get container status \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": rpc error: code = NotFound desc = could not find container \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": container with ID starting with f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94 not found: ID does not exist"
Feb 20 11:48:57.657436 master-0 kubenswrapper[4180]: I0220 11:48:57.657342 4180 scope.go:117] "RemoveContainer" containerID="5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"
Feb 20 11:48:57.658084 master-0 kubenswrapper[4180]: I0220 11:48:57.658043 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"} err="failed to get container status \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": rpc error: code = NotFound desc = could not find container \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": container with ID starting with 5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2 not found: ID does not exist"
Feb 20 11:48:57.658084 master-0 kubenswrapper[4180]: I0220 11:48:57.658079 4180 scope.go:117] "RemoveContainer" containerID="c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"
Feb 20 11:48:57.658716 master-0 kubenswrapper[4180]: I0220 11:48:57.658678 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"} err="failed to get container status \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": rpc error: code = NotFound desc = could not find container \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": container with ID starting with c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0 not found: ID does not exist"
Feb 20 11:48:57.658716 master-0 kubenswrapper[4180]: I0220 11:48:57.658712 4180 scope.go:117] "RemoveContainer" containerID="9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"
Feb 20 11:48:57.659317 master-0 kubenswrapper[4180]: I0220 11:48:57.659279 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8"} err="failed to get container status \"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8\": rpc error: code = NotFound desc = could not find container \"9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8\": container with ID starting with 9ddcf4cfdd3f5159a8b3f9e1c46387395291be0fa54eeaf5c621f97a8fc364f8 not found: ID does not exist"
Feb 20 11:48:57.659317 master-0 kubenswrapper[4180]: I0220 11:48:57.659315 4180 scope.go:117] "RemoveContainer" containerID="6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"
Feb 20 11:48:57.659915 master-0 kubenswrapper[4180]: I0220 11:48:57.659866 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43"} err="failed to get container status \"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43\": rpc error: code = NotFound desc = could not find container \"6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43\": container with ID starting with 6b0ffbe1c9cf1cf2b5aec11c514bf927c86bfe1a3d4eddf85ece298323606a43 not found: ID does not exist"
Feb 20 11:48:57.660054 master-0 kubenswrapper[4180]: I0220 11:48:57.659915 4180 scope.go:117] "RemoveContainer" containerID="5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"
Feb 20 11:48:57.660485 master-0 kubenswrapper[4180]: I0220 11:48:57.660449 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1"} err="failed to get container status \"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1\": rpc error: code = NotFound desc = could not find container \"5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1\": container with ID starting with 5ce632c3ff6de9bcd03b513ec2f0bcf06f037bfebbe0ea6d86e05621cce1ade1 not found: ID does not exist"
Feb 20 11:48:57.660636 master-0 kubenswrapper[4180]: I0220 11:48:57.660483 4180 scope.go:117] "RemoveContainer" containerID="781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"
Feb 20 11:48:57.661083 master-0 kubenswrapper[4180]: I0220 11:48:57.661028 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b"} err="failed to get container status \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": rpc error: code = NotFound desc = could not find container \"781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b\": container with ID starting with 781ee382dd691834fe8e164d64155dd180ac46695c6dac4eec3b316743f7fe2b not found: ID does not exist"
Feb 20 11:48:57.661083 master-0 kubenswrapper[4180]: I0220 11:48:57.661079 4180 scope.go:117] "RemoveContainer" containerID="168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"
Feb 20 11:48:57.661626 master-0 kubenswrapper[4180]: I0220 11:48:57.661586 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d"} err="failed to get container status \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": rpc error: code = NotFound desc = could not find container \"168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d\": container with ID starting with 168c8bdcb158cf5941d424727c0e1b5548e9db252e3312321def54eefef0a14d not found: ID does not exist"
Feb 20 11:48:57.661626 master-0 kubenswrapper[4180]: I0220 11:48:57.661620 4180 scope.go:117] "RemoveContainer" containerID="7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"
Feb 20 11:48:57.662033 master-0 kubenswrapper[4180]: I0220 11:48:57.661988 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6"} err="failed to get container status \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": rpc error: code = NotFound desc = could not find container \"7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6\": container with ID starting with 7262803de8a8bf672b86af672b331319b1ea196900bd541220a58237b7e1d3a6 not found: ID does not exist"
Feb 20 11:48:57.662033 master-0 kubenswrapper[4180]: I0220 11:48:57.662030 4180 scope.go:117] "RemoveContainer" containerID="f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"
Feb 20 11:48:57.662630 master-0 kubenswrapper[4180]: I0220 11:48:57.662590 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94"} err="failed to get container status \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": rpc error: code = NotFound desc = could not find container \"f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94\": container with ID starting with f16aaf3ec63635dd93f6ccba6f06da6d8eb7c948b4b392e6b5fe1121f2cbff94 not found: ID does not exist"
Feb 20 11:48:57.662630 master-0 kubenswrapper[4180]: I0220 11:48:57.662627 4180 scope.go:117] "RemoveContainer" containerID="5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"
Feb 20 11:48:57.663231 master-0 kubenswrapper[4180]: I0220 11:48:57.663187 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2"} err="failed to get container status \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": rpc error: code = NotFound desc = could not find container \"5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2\": container with ID starting with 5c52a4a9437a02231a0d314dae309dcae15d11bd25054d82208a4045a8755af2 not found: ID does not exist"
Feb 20 11:48:57.663231 master-0 kubenswrapper[4180]: I0220 11:48:57.663227 4180 scope.go:117] "RemoveContainer" containerID="c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"
Feb 20 11:48:57.663727 master-0 kubenswrapper[4180]: I0220 11:48:57.663686 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:57.663827 master-0 kubenswrapper[4180]: I0220 11:48:57.663750 4180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0"} err="failed to get container status \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": rpc error: code = NotFound desc = could not find container \"c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0\": container with ID starting with c2f44cd5c3847cef12ba2c65cf3c3c6c66cd327e0a4c9da45bd7d94298fc01b0 not found: ID does not exist"
Feb 20 11:48:57.663916 master-0 kubenswrapper[4180]: E0220 11:48:57.663866 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:48:57.679198 master-0 kubenswrapper[4180]: I0220 11:48:57.679135 4180 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e979c659-8581-466f-8528-01b6b4f51499" path="/var/lib/kubelet/pods/e979c659-8581-466f-8528-01b6b4f51499/volumes"
Feb 20 11:48:58.400615 master-0 kubenswrapper[4180]: I0220 11:48:58.400497 4180 generic.go:334] "Generic (PLEG): container finished" podID="478be5e4-cf17-4ebf-a45a-c18cd2b69929" containerID="5b4211a2cc9a2198d36fabbec1b685ffa0d3133fee06da2f4d880279f8a4b229" exitCode=0
Feb 20 11:48:58.401710 master-0 kubenswrapper[4180]: I0220 11:48:58.400588 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerDied","Data":"5b4211a2cc9a2198d36fabbec1b685ffa0d3133fee06da2f4d880279f8a4b229"}
Feb 20 11:48:58.664015 master-0 kubenswrapper[4180]: I0220 11:48:58.663949 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:48:58.664204 master-0 kubenswrapper[4180]: E0220 11:48:58.664130 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:48:59.410675 master-0 kubenswrapper[4180]: I0220 11:48:59.407477 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerStarted","Data":"114a60ded5236b0f2ee18c30f42acf48bea3f47bb91ff36d452bfd98c177d781"}
Feb 20 11:48:59.410675 master-0 kubenswrapper[4180]: I0220 11:48:59.407772 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerStarted","Data":"a730a0b80c7ce510c15a5ac91bbaedb2c5bc0865995cc5e88448e8fc5e25a5c6"}
Feb 20 11:48:59.410675 master-0 kubenswrapper[4180]: I0220 11:48:59.407785 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerStarted","Data":"c9a6aba68268a75b04bd7dbdcea87d6031b34d0b1ebb7dcffceef4bbfed7351e"}
Feb 20 11:48:59.410675 master-0 kubenswrapper[4180]: I0220 11:48:59.407800 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerStarted","Data":"183e3b93642a08a1a30105f712b8ff87531ae3e91acbaf2958391dd1d83b7393"}
Feb 20 11:48:59.410675 master-0 kubenswrapper[4180]: I0220 11:48:59.407811 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerStarted","Data":"4347ec78f5e7d0624f10a02f82fb389fb1a24a07db856d010f887ab85c9db5ed"}
Feb 20 11:48:59.410675 master-0 kubenswrapper[4180]: I0220 11:48:59.407821 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerStarted","Data":"5e7c4e0870521163c4b7644fcc614f3bf89f8588b8af1e6c39138d8141670b17"}
Feb 20 11:48:59.663801 master-0 kubenswrapper[4180]: I0220 11:48:59.663708 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:48:59.665399 master-0 kubenswrapper[4180]: E0220 11:48:59.665311 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:49:00.664264 master-0 kubenswrapper[4180]: I0220 11:49:00.664169 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:00.665088 master-0 kubenswrapper[4180]: E0220 11:49:00.664370 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:49:01.663896 master-0 kubenswrapper[4180]: I0220 11:49:01.663723 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:01.664110 master-0 kubenswrapper[4180]: E0220 11:49:01.663911 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:49:02.427708 master-0 kubenswrapper[4180]: I0220 11:49:02.427617 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerStarted","Data":"df872f46852dd77d5c0a4bec58845a3582eba4132254a30ff9685391c714e330"}
Feb 20 11:49:02.663871 master-0 kubenswrapper[4180]: I0220 11:49:02.663802 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:02.664094 master-0 kubenswrapper[4180]: E0220 11:49:02.664045 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:49:03.436591 master-0 kubenswrapper[4180]: I0220 11:49:03.436446 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" event={"ID":"478be5e4-cf17-4ebf-a45a-c18cd2b69929","Type":"ContainerStarted","Data":"a3a7b761677eeea4df201a84270252709b5ea56a59a0093f6466422b4d8813e0"}
Feb 20 11:49:03.437798 master-0 kubenswrapper[4180]: I0220 11:49:03.436857 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:03.437798 master-0 kubenswrapper[4180]: I0220 11:49:03.436937 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:03.472071 master-0 kubenswrapper[4180]: I0220 11:49:03.472009 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:03.664346 master-0 kubenswrapper[4180]: I0220 11:49:03.664216 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:03.664743 master-0 kubenswrapper[4180]: E0220 11:49:03.664416 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:49:04.258326 master-0 kubenswrapper[4180]: I0220 11:49:04.258223 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" podStartSLOduration=8.258197669 podStartE2EDuration="8.258197669s" podCreationTimestamp="2026-02-20 11:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:49:03.863617508 +0000 UTC m=+105.038669378" watchObservedRunningTime="2026-02-20 11:49:04.258197669 +0000 UTC m=+105.433249529"
Feb 20 11:49:04.412040 master-0 kubenswrapper[4180]: I0220 11:49:04.411928 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:04.412278 master-0 kubenswrapper[4180]: E0220 11:49:04.412101 4180 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 11:49:04.412278 master-0 kubenswrapper[4180]: E0220 11:49:04.412186 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:50:08.412161867 +0000 UTC m=+169.587213687 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found
Feb 20 11:49:04.444611 master-0 kubenswrapper[4180]: I0220 11:49:04.443921 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:04.513831 master-0 kubenswrapper[4180]: I0220 11:49:04.513713 4180 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:04.664748 master-0 kubenswrapper[4180]: I0220 11:49:04.664638 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:04.664981 master-0 kubenswrapper[4180]: E0220 11:49:04.664862 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:49:04.714934 master-0 kubenswrapper[4180]: I0220 11:49:04.714819 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:04.715269 master-0 kubenswrapper[4180]: E0220 11:49:04.714971 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 20 11:49:04.715269 master-0 kubenswrapper[4180]: E0220 11:49:04.715007 4180 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 20 11:49:04.715269 master-0 kubenswrapper[4180]: E0220 11:49:04.715025 4180 projected.go:194] Error preparing data for projected volume kube-api-access-4zmwm for pod openshift-network-diagnostics/network-check-target-h5w2t: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 11:49:04.715269 master-0 kubenswrapper[4180]: E0220 11:49:04.715094 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm podName:39ccf158-b40f-4dba-90e2-27b1409487b7 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:36.715069844 +0000 UTC m=+137.890121694 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4zmwm" (UniqueName: "kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm") pod "network-check-target-h5w2t" (UID: "39ccf158-b40f-4dba-90e2-27b1409487b7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 11:49:05.664212 master-0 kubenswrapper[4180]: I0220 11:49:05.664100 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:05.664987 master-0 kubenswrapper[4180]: E0220 11:49:05.664321 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:49:06.664418 master-0 kubenswrapper[4180]: I0220 11:49:06.664107 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:06.664418 master-0 kubenswrapper[4180]: E0220 11:49:06.664336 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:49:07.664501 master-0 kubenswrapper[4180]: I0220 11:49:07.664345 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:07.665333 master-0 kubenswrapper[4180]: E0220 11:49:07.664604 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:49:08.664700 master-0 kubenswrapper[4180]: I0220 11:49:08.664595 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:08.665416 master-0 kubenswrapper[4180]: E0220 11:49:08.664796 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-29622" podUID="1709ef31-9ddd-42bf-9a95-4be4502a0828"
Feb 20 11:49:09.664636 master-0 kubenswrapper[4180]: I0220 11:49:09.664509 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:09.665910 master-0 kubenswrapper[4180]: E0220 11:49:09.665844 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h5w2t" podUID="39ccf158-b40f-4dba-90e2-27b1409487b7"
Feb 20 11:49:10.131853 master-0 kubenswrapper[4180]: I0220 11:49:10.131449 4180 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Feb 20 11:49:10.132033 master-0 kubenswrapper[4180]: I0220 11:49:10.131973 4180 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Feb 20 11:49:10.178741 master-0 kubenswrapper[4180]: I0220 11:49:10.178669 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj"]
Feb 20 11:49:10.179415 master-0 kubenswrapper[4180]: I0220 11:49:10.179384 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85"]
Feb 20 11:49:10.179923 master-0 kubenswrapper[4180]: I0220 11:49:10.179893 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85"
Feb 20 11:49:10.180204 master-0 kubenswrapper[4180]: I0220 11:49:10.180154 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8"]
Feb 20 11:49:10.180638 master-0 kubenswrapper[4180]: I0220 11:49:10.180597 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8"
Feb 20 11:49:10.180917 master-0 kubenswrapper[4180]: I0220 11:49:10.180887 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj"
Feb 20 11:49:10.185348 master-0 kubenswrapper[4180]: I0220 11:49:10.185296 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"]
Feb 20 11:49:10.186136 master-0 kubenswrapper[4180]: I0220 11:49:10.186095 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"
Feb 20 11:49:10.188178 master-0 kubenswrapper[4180]: I0220 11:49:10.188141 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 11:49:10.192377 master-0 kubenswrapper[4180]: I0220 11:49:10.192296 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 20 11:49:10.192948 master-0 kubenswrapper[4180]: I0220 11:49:10.192879 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 20 11:49:10.193198 master-0 kubenswrapper[4180]: I0220 11:49:10.193152 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 20 11:49:10.193590 master-0 kubenswrapper[4180]: I0220 11:49:10.193511 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 20 11:49:10.193730 master-0 kubenswrapper[4180]: I0220 11:49:10.193683 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 20 11:49:10.193948 master-0 kubenswrapper[4180]: I0220 11:49:10.193903 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 20 11:49:10.194170 master-0 kubenswrapper[4180]: I0220 11:49:10.194129 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 20 11:49:10.194294 master-0 kubenswrapper[4180]: I0220 11:49:10.194221 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 20 11:49:10.198388 master-0 kubenswrapper[4180]: I0220 11:49:10.198334 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 20 11:49:10.199367 master-0 kubenswrapper[4180]: I0220 11:49:10.199318 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 20 11:49:10.200156 master-0 kubenswrapper[4180]: I0220 11:49:10.200100 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"]
Feb 20 11:49:10.200811 master-0 kubenswrapper[4180]: I0220 11:49:10.200752 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 20 11:49:10.200999 master-0 kubenswrapper[4180]: I0220 11:49:10.200959 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 20 11:49:10.201321 master-0 kubenswrapper[4180]: I0220 11:49:10.201281 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"]
Feb 20 11:49:10.201823 master-0 kubenswrapper[4180]: I0220 11:49:10.201770 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:10.202058 master-0 kubenswrapper[4180]: I0220 11:49:10.202025 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"]
Feb 20 11:49:10.202932 master-0 kubenswrapper[4180]: I0220 11:49:10.202889 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:10.203897 master-0 kubenswrapper[4180]: I0220 11:49:10.203835 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc"]
Feb 20 11:49:10.204430 master-0 kubenswrapper[4180]: I0220 11:49:10.204390 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"
Feb 20 11:49:10.204915 master-0 kubenswrapper[4180]: I0220 11:49:10.204855 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:10.212978 master-0 kubenswrapper[4180]: I0220 11:49:10.212926 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 20 11:49:10.213253 master-0 kubenswrapper[4180]: I0220 11:49:10.213224 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 20 11:49:10.213391 master-0 kubenswrapper[4180]: I0220 11:49:10.213329 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 20 11:49:10.213573 master-0 kubenswrapper[4180]: I0220 11:49:10.213543 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.213693 master-0 kubenswrapper[4180]: I0220 11:49:10.213675 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 20 11:49:10.213923 master-0 kubenswrapper[4180]: I0220 11:49:10.213881 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 20 11:49:10.214072 master-0 kubenswrapper[4180]: I0220 11:49:10.213884 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.214303 master-0 kubenswrapper[4180]: I0220 11:49:10.214261 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 20 11:49:10.214481 master-0 kubenswrapper[4180]: I0220 11:49:10.214426 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 20 11:49:10.214785 master-0 kubenswrapper[4180]: I0220 11:49:10.214748 4180 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 20 11:49:10.214923 master-0 kubenswrapper[4180]: I0220 11:49:10.214910 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 20 11:49:10.215191 master-0 kubenswrapper[4180]: I0220 11:49:10.214353 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.215392 master-0 kubenswrapper[4180]: I0220 11:49:10.215353 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 20 11:49:10.215572 master-0 kubenswrapper[4180]: I0220 11:49:10.215506 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 20 11:49:10.215572 master-0 kubenswrapper[4180]: I0220 11:49:10.215507 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 20 11:49:10.215845 master-0 kubenswrapper[4180]: I0220 11:49:10.215807 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 20 11:49:10.215975 master-0 kubenswrapper[4180]: I0220 11:49:10.215941 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 20 11:49:10.216176 master-0 kubenswrapper[4180]: I0220 11:49:10.215190 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw"] Feb 20 11:49:10.216634 master-0 kubenswrapper[4180]: I0220 11:49:10.216189 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 20 11:49:10.216634 master-0 kubenswrapper[4180]: I0220 11:49:10.216286 4180 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 20 11:49:10.217356 master-0 kubenswrapper[4180]: I0220 11:49:10.217309 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk"] Feb 20 11:49:10.217902 master-0 kubenswrapper[4180]: I0220 11:49:10.217827 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" Feb 20 11:49:10.219264 master-0 kubenswrapper[4180]: I0220 11:49:10.218187 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"] Feb 20 11:49:10.219579 master-0 kubenswrapper[4180]: I0220 11:49:10.218383 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:10.220464 master-0 kubenswrapper[4180]: I0220 11:49:10.220424 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.231596 master-0 kubenswrapper[4180]: I0220 11:49:10.229907 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 20 11:49:10.231596 master-0 kubenswrapper[4180]: I0220 11:49:10.230645 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"] Feb 20 11:49:10.231596 master-0 kubenswrapper[4180]: I0220 11:49:10.231343 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.232354 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx"] Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.232875 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.233093 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.233289 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.233458 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.233628 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.233845 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.233897 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.233997 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: 
I0220 11:49:10.234305 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.234960 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 20 11:49:10.235557 master-0 kubenswrapper[4180]: I0220 11:49:10.235411 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 20 11:49:10.236164 master-0 kubenswrapper[4180]: I0220 11:49:10.235613 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 20 11:49:10.236164 master-0 kubenswrapper[4180]: I0220 11:49:10.235802 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.236164 master-0 kubenswrapper[4180]: I0220 11:49:10.236118 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 20 11:49:10.236365 master-0 kubenswrapper[4180]: I0220 11:49:10.236298 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 20 11:49:10.236829 master-0 kubenswrapper[4180]: I0220 11:49:10.236795 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.237040 master-0 kubenswrapper[4180]: I0220 11:49:10.237012 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 20 11:49:10.237235 master-0 kubenswrapper[4180]: I0220 11:49:10.237183 4180 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 20 11:49:10.237895 master-0 kubenswrapper[4180]: I0220 11:49:10.237855 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"] Feb 20 11:49:10.238566 master-0 kubenswrapper[4180]: I0220 11:49:10.238506 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:10.253694 master-0 kubenswrapper[4180]: I0220 11:49:10.250915 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 20 11:49:10.253694 master-0 kubenswrapper[4180]: I0220 11:49:10.251809 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 20 11:49:10.253930 master-0 kubenswrapper[4180]: I0220 11:49:10.253713 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd"] Feb 20 11:49:10.264408 master-0 kubenswrapper[4180]: I0220 11:49:10.264076 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 20 11:49:10.264656 master-0 kubenswrapper[4180]: I0220 11:49:10.264595 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:10.264929 master-0 kubenswrapper[4180]: I0220 11:49:10.264896 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"] Feb 20 11:49:10.265674 master-0 kubenswrapper[4180]: I0220 11:49:10.265650 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"] Feb 20 11:49:10.266085 master-0 kubenswrapper[4180]: I0220 11:49:10.266025 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:10.266708 master-0 kubenswrapper[4180]: I0220 11:49:10.266665 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"] Feb 20 11:49:10.267074 master-0 kubenswrapper[4180]: I0220 11:49:10.267037 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:10.267172 master-0 kubenswrapper[4180]: I0220 11:49:10.267077 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-qhx9j"] Feb 20 11:49:10.267409 master-0 kubenswrapper[4180]: I0220 11:49:10.267365 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 11:49:10.268012 master-0 kubenswrapper[4180]: I0220 11:49:10.267957 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"] Feb 20 11:49:10.268563 master-0 kubenswrapper[4180]: I0220 11:49:10.268502 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:10.268632 master-0 kubenswrapper[4180]: I0220 11:49:10.268617 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:10.269264 master-0 kubenswrapper[4180]: I0220 11:49:10.269217 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"] Feb 20 11:49:10.269640 master-0 kubenswrapper[4180]: I0220 11:49:10.269573 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 11:49:10.270053 master-0 kubenswrapper[4180]: I0220 11:49:10.270008 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"] Feb 20 11:49:10.270493 master-0 kubenswrapper[4180]: I0220 11:49:10.270449 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:10.270755 master-0 kubenswrapper[4180]: I0220 11:49:10.270712 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"] Feb 20 11:49:10.271075 master-0 kubenswrapper[4180]: I0220 11:49:10.271032 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:10.272042 master-0 kubenswrapper[4180]: I0220 11:49:10.272002 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj"] Feb 20 11:49:10.272348 master-0 kubenswrapper[4180]: I0220 11:49:10.272283 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e90033-9ddf-41b4-ab61-e89add6c2fde-serving-cert\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:10.272428 master-0 kubenswrapper[4180]: I0220 11:49:10.272354 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:10.272428 master-0 kubenswrapper[4180]: I0220 11:49:10.272392 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-serving-cert\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.272428 master-0 kubenswrapper[4180]: I0220 11:49:10.272420 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod 
\"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.272581 master-0 kubenswrapper[4180]: I0220 11:49:10.272446 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-images\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.272581 master-0 kubenswrapper[4180]: I0220 11:49:10.272475 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:10.272581 master-0 kubenswrapper[4180]: I0220 11:49:10.272516 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4w6\" (UniqueName: \"kubernetes.io/projected/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-kube-api-access-8p4w6\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:10.272722 master-0 kubenswrapper[4180]: I0220 11:49:10.272593 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: 
\"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:10.272722 master-0 kubenswrapper[4180]: I0220 11:49:10.272645 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-config\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.272722 master-0 kubenswrapper[4180]: I0220 11:49:10.272686 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvjcp\" (UniqueName: \"kubernetes.io/projected/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-kube-api-access-lvjcp\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.272823 master-0 kubenswrapper[4180]: I0220 11:49:10.272722 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:10.272823 master-0 kubenswrapper[4180]: I0220 11:49:10.272775 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kct\" (UniqueName: \"kubernetes.io/projected/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-kube-api-access-z2kct\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:10.272823 master-0 kubenswrapper[4180]: 
I0220 11:49:10.272808 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.272927 master-0 kubenswrapper[4180]: I0220 11:49:10.272849 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2tk7\" (UniqueName: \"kubernetes.io/projected/01e90033-9ddf-41b4-ab61-e89add6c2fde-kube-api-access-j2tk7\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:10.272927 master-0 kubenswrapper[4180]: I0220 11:49:10.272875 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:10.272927 master-0 kubenswrapper[4180]: I0220 11:49:10.272899 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:10.272927 master-0 kubenswrapper[4180]: I0220 11:49:10.272920 4180 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.273058 master-0 kubenswrapper[4180]: I0220 11:49:10.272946 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpk24\" (UniqueName: \"kubernetes.io/projected/d65a0af4-c96f-44f8-9384-6bae4585983b-kube-api-access-bpk24\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:10.273058 master-0 kubenswrapper[4180]: I0220 11:49:10.272969 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:10.273058 master-0 kubenswrapper[4180]: I0220 11:49:10.272992 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.273058 master-0 kubenswrapper[4180]: I0220 11:49:10.273018 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01e90033-9ddf-41b4-ab61-e89add6c2fde-config\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:10.273058 master-0 kubenswrapper[4180]: I0220 11:49:10.273045 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:10.273229 master-0 kubenswrapper[4180]: I0220 11:49:10.273078 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqxhp\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-kube-api-access-lqxhp\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.273229 master-0 kubenswrapper[4180]: I0220 11:49:10.273122 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.273229 master-0 kubenswrapper[4180]: I0220 11:49:10.273146 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-client\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.273229 master-0 kubenswrapper[4180]: I0220 11:49:10.273172 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.273229 master-0 kubenswrapper[4180]: I0220 11:49:10.273193 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.273229 master-0 kubenswrapper[4180]: I0220 11:49:10.273216 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273239 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273266 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk5sc\" (UniqueName: \"kubernetes.io/projected/eb135cff-1a2e-468d-80ab-f7db3f57552a-kube-api-access-tk5sc\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273289 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1388469-5e55-4c1b-97c3-c88777f29ae7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273312 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273337 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvxsh\" (UniqueName: \"kubernetes.io/projected/839bf5b1-b242-4bbd-bc09-cf6abcf7f734-kube-api-access-pvxsh\") pod \"csi-snapshot-controller-operator-6fb4df594f-8x7xw\" (UID: \"839bf5b1-b242-4bbd-bc09-cf6abcf7f734\") " 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273374 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k2dv\" (UniqueName: \"kubernetes.io/projected/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-kube-api-access-8k2dv\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273396 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df81fcc-f967-4874-ad16-1a89f0e7875a-config\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273417 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2a7cb1-1d05-4b24-86ed-f823fad5013e-trusted-ca\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273448 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mggv\" (UniqueName: \"kubernetes.io/projected/1df81fcc-f967-4874-ad16-1a89f0e7875a-kube-api-access-7mggv\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:10.277627 master-0 
kubenswrapper[4180]: I0220 11:49:10.273488 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1388469-5e55-4c1b-97c3-c88777f29ae7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273512 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273554 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273581 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-serving-cert\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:10.277627 master-0 kubenswrapper[4180]: I0220 11:49:10.273601 4180 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.273626 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6td56\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-kube-api-access-6td56\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.273647 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1388469-5e55-4c1b-97c3-c88777f29ae7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.273671 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nd7r\" (UniqueName: \"kubernetes.io/projected/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-kube-api-access-8nd7r\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.273697 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.273722 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k8n8\" (UniqueName: \"kubernetes.io/projected/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-kube-api-access-2k8n8\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.273747 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df81fcc-f967-4874-ad16-1a89f0e7875a-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.273769 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8"] Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.273775 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-bound-sa-token\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.273902 4180 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85"] Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.274518 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.274652 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.274673 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.274827 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.274908 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.274979 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275189 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"] Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275233 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275301 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275389 4180 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275450 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275680 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275707 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275755 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275787 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"] Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275835 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275845 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 20 11:49:10.278163 master-0 kubenswrapper[4180]: I0220 11:49:10.275923 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.275955 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 20 
11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.275988 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.276145 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.276264 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.276329 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.276321 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.276479 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.276559 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.276759 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk"] Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.277599 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.278244 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 20 11:49:10.279086 
master-0 kubenswrapper[4180]: I0220 11:49:10.278360 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc"] Feb 20 11:49:10.279086 master-0 kubenswrapper[4180]: I0220 11:49:10.278402 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 20 11:49:10.279612 master-0 kubenswrapper[4180]: I0220 11:49:10.279109 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw"] Feb 20 11:49:10.282315 master-0 kubenswrapper[4180]: I0220 11:49:10.282172 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"] Feb 20 11:49:10.285676 master-0 kubenswrapper[4180]: I0220 11:49:10.284132 4180 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-gkxzr"] Feb 20 11:49:10.285676 master-0 kubenswrapper[4180]: I0220 11:49:10.284608 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:10.286118 master-0 kubenswrapper[4180]: I0220 11:49:10.286082 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"] Feb 20 11:49:10.286574 master-0 kubenswrapper[4180]: I0220 11:49:10.286480 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx"] Feb 20 11:49:10.287289 master-0 kubenswrapper[4180]: I0220 11:49:10.287230 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 20 11:49:10.287831 master-0 kubenswrapper[4180]: I0220 11:49:10.287799 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd"] Feb 20 11:49:10.291168 master-0 kubenswrapper[4180]: I0220 11:49:10.291122 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 20 11:49:10.291325 master-0 kubenswrapper[4180]: I0220 11:49:10.291284 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"] Feb 20 11:49:10.292509 master-0 kubenswrapper[4180]: I0220 11:49:10.292460 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"] Feb 20 11:49:10.298354 master-0 kubenswrapper[4180]: I0220 11:49:10.296520 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 20 11:49:10.298354 master-0 kubenswrapper[4180]: I0220 11:49:10.296839 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"] Feb 20 11:49:10.300329 master-0 
kubenswrapper[4180]: I0220 11:49:10.300296 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"] Feb 20 11:49:10.303791 master-0 kubenswrapper[4180]: I0220 11:49:10.303764 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"] Feb 20 11:49:10.308189 master-0 kubenswrapper[4180]: I0220 11:49:10.308162 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-qhx9j"] Feb 20 11:49:10.309168 master-0 kubenswrapper[4180]: I0220 11:49:10.309144 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"] Feb 20 11:49:10.310781 master-0 kubenswrapper[4180]: I0220 11:49:10.310753 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"] Feb 20 11:49:10.311726 master-0 kubenswrapper[4180]: I0220 11:49:10.311703 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"] Feb 20 11:49:10.312682 master-0 kubenswrapper[4180]: I0220 11:49:10.312663 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"] Feb 20 11:49:10.313849 master-0 kubenswrapper[4180]: I0220 11:49:10.313798 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"] Feb 20 11:49:10.375393 master-0 kubenswrapper[4180]: I0220 11:49:10.375036 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " 
pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.375393 master-0 kubenswrapper[4180]: I0220 11:49:10.375107 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.376199 master-0 kubenswrapper[4180]: I0220 11:49:10.376148 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.377244 master-0 kubenswrapper[4180]: I0220 11:49:10.377177 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.377352 master-0 kubenswrapper[4180]: I0220 11:49:10.377303 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wnh5\" (UniqueName: \"kubernetes.io/projected/22bba1b3-587d-4802-b4ae-946827c3fa7a-kube-api-access-2wnh5\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:10.377437 master-0 kubenswrapper[4180]: I0220 11:49:10.377399 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.377502 master-0 kubenswrapper[4180]: I0220 11:49:10.377471 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk5sc\" (UniqueName: \"kubernetes.io/projected/eb135cff-1a2e-468d-80ab-f7db3f57552a-kube-api-access-tk5sc\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.377607 master-0 kubenswrapper[4180]: I0220 11:49:10.377572 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0b28c90-d5b6-44f3-867c-020ece32ac7d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 11:49:10.377669 master-0 kubenswrapper[4180]: I0220 11:49:10.377643 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:10.377726 master-0 kubenswrapper[4180]: I0220 11:49:10.377698 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0b28c90-d5b6-44f3-867c-020ece32ac7d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: 
\"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 11:49:10.377803 master-0 kubenswrapper[4180]: I0220 11:49:10.377756 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1388469-5e55-4c1b-97c3-c88777f29ae7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:10.377870 master-0 kubenswrapper[4180]: I0220 11:49:10.377813 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvxsh\" (UniqueName: \"kubernetes.io/projected/839bf5b1-b242-4bbd-bc09-cf6abcf7f734-kube-api-access-pvxsh\") pod \"csi-snapshot-controller-operator-6fb4df594f-8x7xw\" (UID: \"839bf5b1-b242-4bbd-bc09-cf6abcf7f734\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" Feb 20 11:49:10.377926 master-0 kubenswrapper[4180]: I0220 11:49:10.377870 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vvm8\" (UniqueName: \"kubernetes.io/projected/5360f3f5-2d07-432f-af45-22659538c55e-kube-api-access-7vvm8\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 11:49:10.377982 master-0 kubenswrapper[4180]: I0220 11:49:10.377947 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26x7b\" (UniqueName: \"kubernetes.io/projected/4d060bff-3c25-4eeb-bdd3-e20fb2687645-kube-api-access-26x7b\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:10.378021 master-0 kubenswrapper[4180]: I0220 11:49:10.377998 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2dv\" (UniqueName: \"kubernetes.io/projected/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-kube-api-access-8k2dv\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:10.378089 master-0 kubenswrapper[4180]: I0220 11:49:10.378051 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:10.378158 master-0 kubenswrapper[4180]: I0220 11:49:10.378101 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:10.378158 master-0 kubenswrapper[4180]: I0220 11:49:10.378141 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df81fcc-f967-4874-ad16-1a89f0e7875a-config\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:10.378247 master-0 kubenswrapper[4180]: I0220 
11:49:10.378175 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2a7cb1-1d05-4b24-86ed-f823fad5013e-trusted-ca\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.378247 master-0 kubenswrapper[4180]: I0220 11:49:10.378212 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mggv\" (UniqueName: \"kubernetes.io/projected/1df81fcc-f967-4874-ad16-1a89f0e7875a-kube-api-access-7mggv\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:10.378340 master-0 kubenswrapper[4180]: I0220 11:49:10.378246 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1388469-5e55-4c1b-97c3-c88777f29ae7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:10.378340 master-0 kubenswrapper[4180]: I0220 11:49:10.378283 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:10.378340 master-0 kubenswrapper[4180]: I0220 11:49:10.378321 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.378456 master-0 kubenswrapper[4180]: I0220 11:49:10.378394 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcnmk\" (UniqueName: \"kubernetes.io/projected/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-kube-api-access-rcnmk\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:10.378495 master-0 kubenswrapper[4180]: I0220 11:49:10.378443 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:10.378563 master-0 kubenswrapper[4180]: I0220 11:49:10.378498 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-serving-cert\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:10.378650 master-0 kubenswrapper[4180]: I0220 11:49:10.378609 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.378676 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6td56\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-kube-api-access-6td56\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.378786 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zkbq\" (UniqueName: \"kubernetes.io/projected/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-kube-api-access-2zkbq\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.378834 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1388469-5e55-4c1b-97c3-c88777f29ae7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: E0220 11:49:10.378841 4180 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.378870 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nd7r\" (UniqueName: 
\"kubernetes.io/projected/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-kube-api-access-8nd7r\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: E0220 11:49:10.378918 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.878892601 +0000 UTC m=+112.053944461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "node-tuning-operator-tls" not found Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.378952 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.378997 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k8n8\" (UniqueName: \"kubernetes.io/projected/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-kube-api-access-2k8n8\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.379041 4180 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df81fcc-f967-4874-ad16-1a89f0e7875a-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.379083 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5360f3f5-2d07-432f-af45-22659538c55e-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.379128 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-bound-sa-token\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.379175 master-0 kubenswrapper[4180]: I0220 11:49:10.379167 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4457\" (UniqueName: \"kubernetes.io/projected/6dfca740-0387-428a-b957-3e8a09c6e352-kube-api-access-d4457\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:10.379693 master-0 kubenswrapper[4180]: I0220 11:49:10.379208 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e90033-9ddf-41b4-ab61-e89add6c2fde-serving-cert\") pod 
\"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:10.379693 master-0 kubenswrapper[4180]: I0220 11:49:10.379247 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:10.379693 master-0 kubenswrapper[4180]: I0220 11:49:10.379288 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:10.379693 master-0 kubenswrapper[4180]: I0220 11:49:10.379328 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-serving-cert\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.379693 master-0 kubenswrapper[4180]: I0220 11:49:10.379407 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:10.379693 master-0 kubenswrapper[4180]: I0220 
11:49:10.379478 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.379693 master-0 kubenswrapper[4180]: I0220 11:49:10.379579 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-images\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.379693 master-0 kubenswrapper[4180]: I0220 11:49:10.379644 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:10.380442 master-0 kubenswrapper[4180]: E0220 11:49:10.380411 4180 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 20 11:49:10.381288 master-0 kubenswrapper[4180]: E0220 11:49:10.381270 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls podName:db2a7cb1-1d05-4b24-86ed-f823fad5013e nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.88124207 +0000 UTC m=+112.056293900 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls") pod "ingress-operator-6569778c84-kw2v6" (UID: "db2a7cb1-1d05-4b24-86ed-f823fad5013e") : secret "metrics-tls" not found Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: I0220 11:49:10.381699 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1388469-5e55-4c1b-97c3-c88777f29ae7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: E0220 11:49:10.381895 4180 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: E0220 11:49:10.381983 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.881953391 +0000 UTC m=+112.057005331 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: I0220 11:49:10.381998 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2a7cb1-1d05-4b24-86ed-f823fad5013e-trusted-ca\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: I0220 11:49:10.382115 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4w6\" (UniqueName: \"kubernetes.io/projected/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-kube-api-access-8p4w6\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: I0220 11:49:10.382172 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: I0220 11:49:10.382300 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvjcp\" (UniqueName: \"kubernetes.io/projected/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-kube-api-access-lvjcp\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: 
\"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: I0220 11:49:10.382399 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j82z\" (UniqueName: \"kubernetes.io/projected/906307ef-d988-49e7-9d63-39116a2c4880-kube-api-access-5j82z\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: I0220 11:49:10.382466 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-config\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: I0220 11:49:10.382564 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: I0220 11:49:10.382816 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:10.383152 master-0 kubenswrapper[4180]: E0220 11:49:10.380454 4180 secret.go:189] Couldn't get secret 
openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.383867 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-images\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.384962 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: E0220 11:49:10.385030 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.885009981 +0000 UTC m=+112.060061921 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.385516 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2kct\" (UniqueName: \"kubernetes.io/projected/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-kube-api-access-z2kct\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.385599 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.385634 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.385682 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df81fcc-f967-4874-ad16-1a89f0e7875a-config\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: 
\"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.385661 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.385827 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tk7\" (UniqueName: \"kubernetes.io/projected/01e90033-9ddf-41b4-ab61-e89add6c2fde-kube-api-access-j2tk7\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.385869 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.387177 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 
11:49:10.387805 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.387881 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/906307ef-d988-49e7-9d63-39116a2c4880-host-slash\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:10.389148 master-0 kubenswrapper[4180]: I0220 11:49:10.387994 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388038 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-serving-cert\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388087 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-serving-cert\") pod 
\"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388133 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388165 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx26k\" (UniqueName: \"kubernetes.io/projected/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-kube-api-access-jx26k\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388196 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388225 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpk24\" (UniqueName: \"kubernetes.io/projected/d65a0af4-c96f-44f8-9384-6bae4585983b-kube-api-access-bpk24\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388254 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/906307ef-d988-49e7-9d63-39116a2c4880-iptables-alerter-script\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388280 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/22bba1b3-587d-4802-b4ae-946827c3fa7a-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388307 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388335 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388360 4180 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e90033-9ddf-41b4-ab61-e89add6c2fde-config\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388386 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388414 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqxhp\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-kube-api-access-lqxhp\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388473 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:10.389747 master-0 kubenswrapper[4180]: I0220 11:49:10.388569 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: I0220 11:49:10.388625 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-client\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: I0220 11:49:10.388653 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5360f3f5-2d07-432f-af45-22659538c55e-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: I0220 11:49:10.388714 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: I0220 11:49:10.388741 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-config\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:10.393049 master-0 
kubenswrapper[4180]: I0220 11:49:10.388794 4180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b28c90-d5b6-44f3-867c-020ece32ac7d-config\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: I0220 11:49:10.389152 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: E0220 11:49:10.389218 4180 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: E0220 11:49:10.389298 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls podName:7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.889279746 +0000 UTC m=+112.064331856 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-r9ntt" (UID: "7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca") : secret "image-registry-operator-tls" not found
Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: E0220 11:49:10.389474 4180 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: E0220 11:49:10.389597 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.889562225 +0000 UTC m=+112.064614085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found
Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: E0220 11:49:10.389990 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: E0220 11:49:10.390069 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.890046449 +0000 UTC m=+112.065098399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found
Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: I0220 11:49:10.390106 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc"
Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: I0220 11:49:10.390656 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df81fcc-f967-4874-ad16-1a89f0e7875a-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx"
Feb 20 11:49:10.393049 master-0 kubenswrapper[4180]: I0220 11:49:10.390835 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-serving-cert\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd"
Feb 20 11:49:10.394007 master-0 kubenswrapper[4180]: I0220 11:49:10.390862 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e90033-9ddf-41b4-ab61-e89add6c2fde-config\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk"
Feb 20 11:49:10.394007 master-0 kubenswrapper[4180]: I0220 11:49:10.390899 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-config\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"
Feb 20 11:49:10.394007 master-0 kubenswrapper[4180]: I0220 11:49:10.390913 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"
Feb 20 11:49:10.394007 master-0 kubenswrapper[4180]: I0220 11:49:10.391158 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e90033-9ddf-41b4-ab61-e89add6c2fde-serving-cert\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk"
Feb 20 11:49:10.394007 master-0 kubenswrapper[4180]: I0220 11:49:10.392905 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8"
Feb 20 11:49:10.394007 master-0 kubenswrapper[4180]: I0220 11:49:10.393580 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-serving-cert\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"
Feb 20 11:49:10.394609 master-0 kubenswrapper[4180]: I0220 11:49:10.394299 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-client\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"
Feb 20 11:49:10.395057 master-0 kubenswrapper[4180]: I0220 11:49:10.395002 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85"
Feb 20 11:49:10.395706 master-0 kubenswrapper[4180]: I0220 11:49:10.395659 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2dv\" (UniqueName: \"kubernetes.io/projected/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-kube-api-access-8k2dv\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd"
Feb 20 11:49:10.401228 master-0 kubenswrapper[4180]: I0220 11:49:10.400422 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:10.401228 master-0 kubenswrapper[4180]: I0220 11:49:10.401136 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1388469-5e55-4c1b-97c3-c88777f29ae7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj"
Feb 20 11:49:10.401466 master-0 kubenswrapper[4180]: I0220 11:49:10.401417 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1388469-5e55-4c1b-97c3-c88777f29ae7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj"
Feb 20 11:49:10.404759 master-0 kubenswrapper[4180]: I0220 11:49:10.404193 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nd7r\" (UniqueName: \"kubernetes.io/projected/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-kube-api-access-8nd7r\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:10.404759 master-0 kubenswrapper[4180]: I0220 11:49:10.404232 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvxsh\" (UniqueName: \"kubernetes.io/projected/839bf5b1-b242-4bbd-bc09-cf6abcf7f734-kube-api-access-pvxsh\") pod \"csi-snapshot-controller-operator-6fb4df594f-8x7xw\" (UID: \"839bf5b1-b242-4bbd-bc09-cf6abcf7f734\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw"
Feb 20 11:49:10.404759 master-0 kubenswrapper[4180]: I0220 11:49:10.404696 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6td56\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-kube-api-access-6td56\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 11:49:10.406615 master-0 kubenswrapper[4180]: I0220 11:49:10.406589 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvjcp\" (UniqueName: \"kubernetes.io/projected/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-kube-api-access-lvjcp\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"
Feb 20 11:49:10.447117 master-0 kubenswrapper[4180]: I0220 11:49:10.447052 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk5sc\" (UniqueName: \"kubernetes.io/projected/eb135cff-1a2e-468d-80ab-f7db3f57552a-kube-api-access-tk5sc\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:10.450540 master-0 kubenswrapper[4180]: I0220 11:49:10.447454 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kct\" (UniqueName: \"kubernetes.io/projected/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-kube-api-access-z2kct\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:49:10.483259 master-0 kubenswrapper[4180]: I0220 11:49:10.483213 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p4w6\" (UniqueName: \"kubernetes.io/projected/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-kube-api-access-8p4w6\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8"
Feb 20 11:49:10.485751 master-0 kubenswrapper[4180]: I0220 11:49:10.485687 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-bound-sa-token\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 11:49:10.490347 master-0 kubenswrapper[4180]: I0220 11:49:10.490193 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5360f3f5-2d07-432f-af45-22659538c55e-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 11:49:10.490347 master-0 kubenswrapper[4180]: I0220 11:49:10.490299 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4457\" (UniqueName: \"kubernetes.io/projected/6dfca740-0387-428a-b957-3e8a09c6e352-kube-api-access-d4457\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:10.490422 master-0 kubenswrapper[4180]: I0220 11:49:10.490351 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:10.490629 master-0 kubenswrapper[4180]: I0220 11:49:10.490581 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j"
Feb 20 11:49:10.490704 master-0 kubenswrapper[4180]: I0220 11:49:10.490683 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j82z\" (UniqueName: \"kubernetes.io/projected/906307ef-d988-49e7-9d63-39116a2c4880-kube-api-access-5j82z\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr"
Feb 20 11:49:10.490899 master-0 kubenswrapper[4180]: I0220 11:49:10.490880 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:10.491005 master-0 kubenswrapper[4180]: I0220 11:49:10.490989 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:10.491128 master-0 kubenswrapper[4180]: I0220 11:49:10.491115 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/906307ef-d988-49e7-9d63-39116a2c4880-host-slash\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr"
Feb 20 11:49:10.491203 master-0 kubenswrapper[4180]: I0220 11:49:10.491191 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-serving-cert\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:10.491305 master-0 kubenswrapper[4180]: E0220 11:49:10.491263 4180 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 11:49:10.491427 master-0 kubenswrapper[4180]: E0220 11:49:10.491400 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls podName:b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.991363578 +0000 UTC m=+112.166415428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls") pod "dns-operator-8c7d49845-qhx9j" (UID: "b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8") : secret "metrics-tls" not found
Feb 20 11:49:10.491816 master-0 kubenswrapper[4180]: I0220 11:49:10.491785 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:10.491860 master-0 kubenswrapper[4180]: I0220 11:49:10.491274 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx26k\" (UniqueName: \"kubernetes.io/projected/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-kube-api-access-jx26k\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:10.491890 master-0 kubenswrapper[4180]: I0220 11:49:10.491865 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/906307ef-d988-49e7-9d63-39116a2c4880-host-slash\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr"
Feb 20 11:49:10.491928 master-0 kubenswrapper[4180]: I0220 11:49:10.491886 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:10.491961 master-0 kubenswrapper[4180]: I0220 11:49:10.491931 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/22bba1b3-587d-4802-b4ae-946827c3fa7a-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:10.492098 master-0 kubenswrapper[4180]: I0220 11:49:10.491987 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/906307ef-d988-49e7-9d63-39116a2c4880-iptables-alerter-script\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr"
Feb 20 11:49:10.492098 master-0 kubenswrapper[4180]: I0220 11:49:10.492074 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:10.492181 master-0 kubenswrapper[4180]: I0220 11:49:10.492156 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:10.492218 master-0 kubenswrapper[4180]: I0220 11:49:10.492208 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5360f3f5-2d07-432f-af45-22659538c55e-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 11:49:10.492261 master-0 kubenswrapper[4180]: I0220 11:49:10.492244 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-config\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:10.492290 master-0 kubenswrapper[4180]: I0220 11:49:10.492270 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b28c90-d5b6-44f3-867c-020ece32ac7d-config\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:10.492317 master-0 kubenswrapper[4180]: I0220 11:49:10.492300 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wnh5\" (UniqueName: \"kubernetes.io/projected/22bba1b3-587d-4802-b4ae-946827c3fa7a-kube-api-access-2wnh5\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:10.492343 master-0 kubenswrapper[4180]: I0220 11:49:10.492326 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0b28c90-d5b6-44f3-867c-020ece32ac7d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:10.492380 master-0 kubenswrapper[4180]: I0220 11:49:10.492361 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0b28c90-d5b6-44f3-867c-020ece32ac7d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:10.492436 master-0 kubenswrapper[4180]: I0220 11:49:10.492418 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvm8\" (UniqueName: \"kubernetes.io/projected/5360f3f5-2d07-432f-af45-22659538c55e-kube-api-access-7vvm8\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 11:49:10.492487 master-0 kubenswrapper[4180]: I0220 11:49:10.492471 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:10.492518 master-0 kubenswrapper[4180]: I0220 11:49:10.492497 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26x7b\" (UniqueName: \"kubernetes.io/projected/4d060bff-3c25-4eeb-bdd3-e20fb2687645-kube-api-access-26x7b\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:10.492570 master-0 kubenswrapper[4180]: I0220 11:49:10.492549 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:10.492618 master-0 kubenswrapper[4180]: I0220 11:49:10.492601 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:10.492647 master-0 kubenswrapper[4180]: I0220 11:49:10.492632 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcnmk\" (UniqueName: \"kubernetes.io/projected/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-kube-api-access-rcnmk\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j"
Feb 20 11:49:10.492674 master-0 kubenswrapper[4180]: I0220 11:49:10.492659 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zkbq\" (UniqueName: \"kubernetes.io/projected/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-kube-api-access-2zkbq\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:10.493105 master-0 kubenswrapper[4180]: E0220 11:49:10.493058 4180 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 11:49:10.493162 master-0 kubenswrapper[4180]: E0220 11:49:10.493127 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 20 11:49:10.493330 master-0 kubenswrapper[4180]: E0220 11:49:10.493083 4180 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 11:49:10.493370 master-0 kubenswrapper[4180]: I0220 11:49:10.493352 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5360f3f5-2d07-432f-af45-22659538c55e-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 11:49:10.493422 master-0 kubenswrapper[4180]: I0220 11:49:10.493371 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-config\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:10.493482 master-0 kubenswrapper[4180]: E0220 11:49:10.493461 4180 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 11:49:10.493819 master-0 kubenswrapper[4180]: I0220 11:49:10.493798 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5360f3f5-2d07-432f-af45-22659538c55e-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 11:49:10.493878 master-0 kubenswrapper[4180]: I0220 11:49:10.493837 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:10.493910 master-0 kubenswrapper[4180]: E0220 11:49:10.493144 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.99312363 +0000 UTC m=+112.168175490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found
Feb 20 11:49:10.493962 master-0 kubenswrapper[4180]: E0220 11:49:10.493929 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.993903813 +0000 UTC m=+112.168955853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found
Feb 20 11:49:10.493995 master-0 kubenswrapper[4180]: E0220 11:49:10.493968 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.993954674 +0000 UTC m=+112.169006504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found
Feb 20 11:49:10.494287 master-0 kubenswrapper[4180]: I0220 11:49:10.494265 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/22bba1b3-587d-4802-b4ae-946827c3fa7a-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:10.494624 master-0 kubenswrapper[4180]: I0220 11:49:10.494409 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b28c90-d5b6-44f3-867c-020ece32ac7d-config\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:10.494624 master-0 kubenswrapper[4180]: E0220 11:49:10.494426 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:10.994378306 +0000 UTC m=+112.169430166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found
Feb 20 11:49:10.494924 master-0 kubenswrapper[4180]: I0220 11:49:10.494888 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2tk7\" (UniqueName: \"kubernetes.io/projected/01e90033-9ddf-41b4-ab61-e89add6c2fde-kube-api-access-j2tk7\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk"
Feb 20 11:49:10.495173 master-0 kubenswrapper[4180]: I0220 11:49:10.495127 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/906307ef-d988-49e7-9d63-39116a2c4880-iptables-alerter-script\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr"
Feb 20 11:49:10.495720 master-0 kubenswrapper[4180]: I0220 11:49:10.495676 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:10.495820 master-0 kubenswrapper[4180]: I0220 11:49:10.495796 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-serving-cert\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:10.496919 master-0 kubenswrapper[4180]: I0220 11:49:10.496890 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0b28c90-d5b6-44f3-867c-020ece32ac7d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:10.506160 master-0 kubenswrapper[4180]: I0220 11:49:10.506127 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85"
Feb 20 11:49:10.522943 master-0 kubenswrapper[4180]: I0220 11:49:10.522899 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd"
Feb 20 11:49:10.524455 master-0 kubenswrapper[4180]: I0220 11:49:10.524424 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85"
Feb 20 11:49:10.525112 master-0 kubenswrapper[4180]: I0220 11:49:10.525095 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqxhp\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-kube-api-access-lqxhp\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"
Feb 20 11:49:10.538047 master-0 kubenswrapper[4180]: I0220 11:49:10.537989 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpk24\" (UniqueName: \"kubernetes.io/projected/d65a0af4-c96f-44f8-9384-6bae4585983b-kube-api-access-bpk24\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:10.565351 master-0 kubenswrapper[4180]: I0220 11:49:10.565313 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj"
Feb 20 11:49:10.565645 master-0 kubenswrapper[4180]: I0220 11:49:10.565584 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mggv\" (UniqueName: \"kubernetes.io/projected/1df81fcc-f967-4874-ad16-1a89f0e7875a-kube-api-access-7mggv\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx"
Feb 20 11:49:10.579149 master-0 kubenswrapper[4180]: I0220 11:49:10.579108 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"
Feb 20 11:49:10.588343 master-0 kubenswrapper[4180]: I0220 11:49:10.588291 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx"
Feb 20 11:49:10.615081 master-0 kubenswrapper[4180]: I0220 11:49:10.615026 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k8n8\" (UniqueName: \"kubernetes.io/projected/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-kube-api-access-2k8n8\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc"
Feb 20 11:49:10.620702 master-0 kubenswrapper[4180]: I0220 11:49:10.620649 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8"
Feb 20 11:49:10.650855 master-0 kubenswrapper[4180]: I0220 11:49:10.648760 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4457\" (UniqueName: \"kubernetes.io/projected/6dfca740-0387-428a-b957-3e8a09c6e352-kube-api-access-d4457\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:10.654487 master-0 kubenswrapper[4180]: I0220 11:49:10.654413 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc"
Feb 20 11:49:10.664179 master-0 kubenswrapper[4180]: I0220 11:49:10.664128 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:10.666288 master-0 kubenswrapper[4180]: I0220 11:49:10.666247 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:10.667366 master-0 kubenswrapper[4180]: I0220 11:49:10.667309 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j82z\" (UniqueName: \"kubernetes.io/projected/906307ef-d988-49e7-9d63-39116a2c4880-kube-api-access-5j82z\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:10.678476 master-0 kubenswrapper[4180]: I0220 11:49:10.678435 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx26k\" (UniqueName: \"kubernetes.io/projected/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-kube-api-access-jx26k\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:10.679316 master-0 kubenswrapper[4180]: I0220 11:49:10.679276 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" Feb 20 11:49:10.707640 master-0 kubenswrapper[4180]: I0220 11:49:10.707225 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zkbq\" (UniqueName: \"kubernetes.io/projected/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-kube-api-access-2zkbq\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:10.735149 master-0 kubenswrapper[4180]: I0220 11:49:10.732209 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0b28c90-d5b6-44f3-867c-020ece32ac7d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 11:49:10.754991 master-0 kubenswrapper[4180]: I0220 11:49:10.754960 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcnmk\" (UniqueName: \"kubernetes.io/projected/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-kube-api-access-rcnmk\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:10.767434 master-0 kubenswrapper[4180]: I0220 11:49:10.767399 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wnh5\" (UniqueName: \"kubernetes.io/projected/22bba1b3-587d-4802-b4ae-946827c3fa7a-kube-api-access-2wnh5\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:10.778298 master-0 kubenswrapper[4180]: I0220 11:49:10.774634 4180 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:10.787741 master-0 kubenswrapper[4180]: I0220 11:49:10.787703 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26x7b\" (UniqueName: \"kubernetes.io/projected/4d060bff-3c25-4eeb-bdd3-e20fb2687645-kube-api-access-26x7b\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:10.803580 master-0 kubenswrapper[4180]: I0220 11:49:10.803516 4180 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvm8\" (UniqueName: \"kubernetes.io/projected/5360f3f5-2d07-432f-af45-22659538c55e-kube-api-access-7vvm8\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 11:49:10.821669 master-0 kubenswrapper[4180]: I0220 11:49:10.821105 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd"] Feb 20 11:49:10.821669 master-0 kubenswrapper[4180]: I0220 11:49:10.821154 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85"] Feb 20 11:49:10.824169 master-0 kubenswrapper[4180]: I0220 11:49:10.824142 4180 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 20 11:49:10.849659 master-0 kubenswrapper[4180]: I0220 11:49:10.849635 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 11:49:10.870921 master-0 kubenswrapper[4180]: I0220 11:49:10.870387 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx"] Feb 20 11:49:10.875283 master-0 kubenswrapper[4180]: I0220 11:49:10.875245 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 11:49:10.876614 master-0 kubenswrapper[4180]: W0220 11:49:10.876575 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1df81fcc_f967_4874_ad16_1a89f0e7875a.slice/crio-e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9 WatchSource:0}: Error finding container e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9: Status 404 returned error can't find the container with id e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9 Feb 20 11:49:10.887585 master-0 kubenswrapper[4180]: I0220 11:49:10.886630 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj"] Feb 20 11:49:10.892458 master-0 kubenswrapper[4180]: W0220 11:49:10.892409 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1388469_5e55_4c1b_97c3_c88777f29ae7.slice/crio-0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941 WatchSource:0}: Error finding container 0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941: Status 404 returned error can't find the container with id 0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941 Feb 20 11:49:10.895367 master-0 kubenswrapper[4180]: I0220 11:49:10.894821 4180 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: I0220 11:49:10.897458 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: I0220 11:49:10.897496 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: I0220 11:49:10.897547 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: I0220 11:49:10.897578 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: I0220 11:49:10.897598 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: I0220 11:49:10.897651 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: I0220 11:49:10.897677 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.897834 4180 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.897901 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls podName:7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca nodeName:}" failed. 
No retries permitted until 2026-02-20 11:49:11.897882801 +0000 UTC m=+113.072934621 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-r9ntt" (UID: "7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca") : secret "image-registry-operator-tls" not found Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.897950 4180 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.897971 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:11.897965313 +0000 UTC m=+113.073017133 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.898009 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.898028 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:11.898021495 +0000 UTC m=+113.073073315 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.898064 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.898321 4180 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.898514 4180 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 20 11:49:10.900796 master-0 kubenswrapper[4180]: E0220 11:49:10.898586 4180 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 20 11:49:10.901268 master-0 kubenswrapper[4180]: E0220 11:49:10.899581 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:11.898073747 +0000 UTC m=+113.073125567 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found Feb 20 11:49:10.901268 master-0 kubenswrapper[4180]: E0220 11:49:10.899604 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:11.899596721 +0000 UTC m=+113.074648541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:10.901268 master-0 kubenswrapper[4180]: E0220 11:49:10.899618 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls podName:db2a7cb1-1d05-4b24-86ed-f823fad5013e nodeName:}" failed. No retries permitted until 2026-02-20 11:49:11.899611792 +0000 UTC m=+113.074663602 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls") pod "ingress-operator-6569778c84-kw2v6" (UID: "db2a7cb1-1d05-4b24-86ed-f823fad5013e") : secret "metrics-tls" not found Feb 20 11:49:10.901268 master-0 kubenswrapper[4180]: E0220 11:49:10.899633 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. 
No retries permitted until 2026-02-20 11:49:11.899625602 +0000 UTC m=+113.074677422 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "node-tuning-operator-tls" not found Feb 20 11:49:10.905318 master-0 kubenswrapper[4180]: I0220 11:49:10.905060 4180 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:10.925986 master-0 kubenswrapper[4180]: W0220 11:49:10.925944 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod906307ef_d988_49e7_9d63_39116a2c4880.slice/crio-65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47 WatchSource:0}: Error finding container 65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47: Status 404 returned error can't find the container with id 65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47 Feb 20 11:49:10.932012 master-0 kubenswrapper[4180]: I0220 11:49:10.931963 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"] Feb 20 11:49:10.947302 master-0 kubenswrapper[4180]: I0220 11:49:10.947258 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc"] Feb 20 11:49:10.947858 master-0 kubenswrapper[4180]: W0220 11:49:10.947821 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3a36bb_9d11_48b3_a3b5_07b47738ef97.slice/crio-8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402 WatchSource:0}: Error finding container 
8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402: Status 404 returned error can't find the container with id 8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402 Feb 20 11:49:10.957059 master-0 kubenswrapper[4180]: W0220 11:49:10.957029 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce2b6fde_de56_49c3_9bd6_e81c679b02bc.slice/crio-318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac WatchSource:0}: Error finding container 318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac: Status 404 returned error can't find the container with id 318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac Feb 20 11:49:10.985410 master-0 kubenswrapper[4180]: I0220 11:49:10.985369 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw"] Feb 20 11:49:10.990596 master-0 kubenswrapper[4180]: W0220 11:49:10.990515 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod839bf5b1_b242_4bbd_bc09_cf6abcf7f734.slice/crio-827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c WatchSource:0}: Error finding container 827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c: Status 404 returned error can't find the container with id 827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c Feb 20 11:49:10.997486 master-0 kubenswrapper[4180]: I0220 11:49:10.994071 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk"] Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: I0220 11:49:10.999846 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: I0220 11:49:10.999898 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: I0220 11:49:10.999934 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: I0220 11:49:10.999966 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: I0220 11:49:11.000204 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 
11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002002 4180 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002047 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:12.002033083 +0000 UTC m=+113.177084903 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002089 4180 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002107 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:12.002100995 +0000 UTC m=+113.177152815 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002141 4180 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002158 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls podName:b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:12.002153057 +0000 UTC m=+113.177204877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls") pod "dns-operator-8c7d49845-qhx9j" (UID: "b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8") : secret "metrics-tls" not found Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002192 4180 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002210 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:12.002204848 +0000 UTC m=+113.177256668 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002242 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 20 11:49:11.003655 master-0 kubenswrapper[4180]: E0220 11:49:11.002258 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:12.00225263 +0000 UTC m=+113.177304450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found Feb 20 11:49:11.043332 master-0 kubenswrapper[4180]: I0220 11:49:11.043299 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"] Feb 20 11:49:11.067108 master-0 kubenswrapper[4180]: I0220 11:49:11.067071 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"] Feb 20 11:49:11.074572 master-0 kubenswrapper[4180]: W0220 11:49:11.074545 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5360f3f5_2d07_432f_af45_22659538c55e.slice/crio-c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b WatchSource:0}: Error finding container 
c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b: Status 404 returned error can't find the container with id c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b Feb 20 11:49:11.093174 master-0 kubenswrapper[4180]: I0220 11:49:11.093147 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"] Feb 20 11:49:11.101829 master-0 kubenswrapper[4180]: W0220 11:49:11.101808 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3aa45a_44cc_48fb_a478_ce01a70c4b02.slice/crio-d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df WatchSource:0}: Error finding container d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df: Status 404 returned error can't find the container with id d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df Feb 20 11:49:11.144817 master-0 kubenswrapper[4180]: I0220 11:49:11.144756 4180 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8"] Feb 20 11:49:11.155166 master-0 kubenswrapper[4180]: W0220 11:49:11.155114 4180 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fca5d50_eb5f_4dbb_bdf6_8e07231406f9.slice/crio-2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3 WatchSource:0}: Error finding container 2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3: Status 404 returned error can't find the container with id 2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3 Feb 20 11:49:11.158608 master-0 kubenswrapper[4180]: E0220 11:49:11.158492 4180 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc,Command:[cluster-kube-storage-version-migrator-operator start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8p4w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-storage-version-migrator-operator-fc889cfd5-stms8_openshift-kube-storage-version-migrator-operator(1fca5d50-eb5f-4dbb-bdf6-8e07231406f9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 20 11:49:11.159950 master-0 kubenswrapper[4180]: E0220 11:49:11.159902 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" podUID="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" Feb 20 11:49:11.499587 master-0 kubenswrapper[4180]: I0220 11:49:11.491780 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" event={"ID":"f1388469-5e55-4c1b-97c3-c88777f29ae7","Type":"ContainerStarted","Data":"0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941"} Feb 20 11:49:11.499587 master-0 kubenswrapper[4180]: I0220 11:49:11.494353 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" event={"ID":"f98aeaf7-bf1a-46af-bf1b-85713baa4c67","Type":"ContainerStarted","Data":"cbd2814207ea73c81ee03ec39936289eb40513d40ec1dfdddcdf33cff0834b18"} Feb 20 11:49:11.499587 master-0 kubenswrapper[4180]: I0220 11:49:11.496706 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" event={"ID":"01e90033-9ddf-41b4-ab61-e89add6c2fde","Type":"ContainerStarted","Data":"a92cb32c4be6840fe62cceeff083a250664f650a02bcc7c9c164c3636c13a84d"} Feb 20 11:49:11.499587 master-0 kubenswrapper[4180]: I0220 11:49:11.498316 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-gkxzr" event={"ID":"906307ef-d988-49e7-9d63-39116a2c4880","Type":"ContainerStarted","Data":"65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47"} Feb 20 11:49:11.499587 master-0 kubenswrapper[4180]: I0220 11:49:11.499424 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" event={"ID":"6c3aa45a-44cc-48fb-a478-ce01a70c4b02","Type":"ContainerStarted","Data":"d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df"} Feb 20 11:49:11.500439 master-0 kubenswrapper[4180]: I0220 11:49:11.500391 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" 
event={"ID":"5360f3f5-2d07-432f-af45-22659538c55e","Type":"ContainerStarted","Data":"c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b"} Feb 20 11:49:11.501565 master-0 kubenswrapper[4180]: I0220 11:49:11.501511 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" event={"ID":"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9","Type":"ContainerStarted","Data":"2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3"} Feb 20 11:49:11.502492 master-0 kubenswrapper[4180]: I0220 11:49:11.502454 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" event={"ID":"1df81fcc-f967-4874-ad16-1a89f0e7875a","Type":"ContainerStarted","Data":"e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9"} Feb 20 11:49:11.504341 master-0 kubenswrapper[4180]: I0220 11:49:11.504157 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" event={"ID":"ce2b6fde-de56-49c3-9bd6-e81c679b02bc","Type":"ContainerStarted","Data":"318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac"} Feb 20 11:49:11.504833 master-0 kubenswrapper[4180]: E0220 11:49:11.504765 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" podUID="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" Feb 20 11:49:11.506332 master-0 kubenswrapper[4180]: I0220 11:49:11.505775 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" event={"ID":"e0b28c90-d5b6-44f3-867c-020ece32ac7d","Type":"ContainerStarted","Data":"73c4ac8066ad3eb7342716309b7b8a802bf833f8fcd163ad12901b630f6305c2"} Feb 20 11:49:11.506332 master-0 kubenswrapper[4180]: I0220 11:49:11.505799 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" event={"ID":"e0b28c90-d5b6-44f3-867c-020ece32ac7d","Type":"ContainerStarted","Data":"78ca76bb28058c596e989b94f315e85b6607b7b0e487f9746f2eff407fceb169"} Feb 20 11:49:11.507376 master-0 kubenswrapper[4180]: I0220 11:49:11.507344 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" event={"ID":"839bf5b1-b242-4bbd-bc09-cf6abcf7f734","Type":"ContainerStarted","Data":"827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c"} Feb 20 11:49:11.508297 master-0 kubenswrapper[4180]: I0220 11:49:11.508198 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" event={"ID":"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145","Type":"ContainerStarted","Data":"d45bdb88cf4fb87c1f9683f4dd82403ae62e23be61f87cc716489058be0075c3"} Feb 20 11:49:11.508882 master-0 kubenswrapper[4180]: I0220 11:49:11.508830 4180 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" event={"ID":"1d3a36bb-9d11-48b3-a3b5-07b47738ef97","Type":"ContainerStarted","Data":"8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402"} Feb 20 11:49:11.664555 master-0 kubenswrapper[4180]: I0220 11:49:11.664476 4180 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:49:11.671363 master-0 kubenswrapper[4180]: I0220 11:49:11.670800 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 20 11:49:11.673398 master-0 kubenswrapper[4180]: I0220 11:49:11.672059 4180 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: I0220 11:49:11.910587 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: I0220 11:49:11.910650 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: E0220 11:49:11.910813 4180 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: E0220 11:49:11.910893 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. 
No retries permitted until 2026-02-20 11:49:13.910870606 +0000 UTC m=+115.085922426 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "node-tuning-operator-tls" not found Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: I0220 11:49:11.910983 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: E0220 11:49:11.911001 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: I0220 11:49:11.911058 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: E0220 11:49:11.911085 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:13.911063481 +0000 UTC m=+115.086115301 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: I0220 11:49:11.911109 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: I0220 11:49:11.911146 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: E0220 11:49:11.911170 4180 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: E0220 11:49:11.911203 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls podName:db2a7cb1-1d05-4b24-86ed-f823fad5013e nodeName:}" failed. No retries permitted until 2026-02-20 11:49:13.911193585 +0000 UTC m=+115.086245545 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls") pod "ingress-operator-6569778c84-kw2v6" (UID: "db2a7cb1-1d05-4b24-86ed-f823fad5013e") : secret "metrics-tls" not found Feb 20 11:49:11.911226 master-0 kubenswrapper[4180]: E0220 11:49:11.911233 4180 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:11.912671 master-0 kubenswrapper[4180]: E0220 11:49:11.911270 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:13.911260567 +0000 UTC m=+115.086312387 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:11.912671 master-0 kubenswrapper[4180]: E0220 11:49:11.911292 4180 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 20 11:49:11.912671 master-0 kubenswrapper[4180]: E0220 11:49:11.911319 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:13.911311659 +0000 UTC m=+115.086363609 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found Feb 20 11:49:11.912671 master-0 kubenswrapper[4180]: I0220 11:49:11.911436 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:11.912671 master-0 kubenswrapper[4180]: E0220 11:49:11.911518 4180 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 20 11:49:11.912671 master-0 kubenswrapper[4180]: E0220 11:49:11.911554 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls podName:7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca nodeName:}" failed. No retries permitted until 2026-02-20 11:49:13.911546396 +0000 UTC m=+115.086598216 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-r9ntt" (UID: "7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca") : secret "image-registry-operator-tls" not found Feb 20 11:49:11.912671 master-0 kubenswrapper[4180]: E0220 11:49:11.911731 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 11:49:11.912671 master-0 kubenswrapper[4180]: E0220 11:49:11.911831 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:13.911812573 +0000 UTC m=+115.086864393 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found Feb 20 11:49:12.013238 master-0 kubenswrapper[4180]: I0220 11:49:12.012506 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:12.013238 master-0 kubenswrapper[4180]: I0220 11:49:12.012579 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod 
\"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:12.013238 master-0 kubenswrapper[4180]: I0220 11:49:12.012618 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:12.013238 master-0 kubenswrapper[4180]: I0220 11:49:12.012677 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:12.013238 master-0 kubenswrapper[4180]: E0220 11:49:12.012782 4180 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:12.013238 master-0 kubenswrapper[4180]: E0220 11:49:12.012832 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:14.012818673 +0000 UTC m=+115.187870493 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:12.013238 master-0 kubenswrapper[4180]: E0220 11:49:12.013127 4180 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 20 11:49:12.013238 master-0 kubenswrapper[4180]: E0220 11:49:12.013151 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls podName:b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:14.013143443 +0000 UTC m=+115.188195263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls") pod "dns-operator-8c7d49845-qhx9j" (UID: "b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8") : secret "metrics-tls" not found Feb 20 11:49:12.013722 master-0 kubenswrapper[4180]: I0220 11:49:12.013687 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:12.013841 master-0 kubenswrapper[4180]: E0220 11:49:12.013374 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 20 11:49:12.013996 master-0 kubenswrapper[4180]: E0220 11:49:12.013408 4180 secret.go:189] Couldn't get secret 
openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 20 11:49:12.013996 master-0 kubenswrapper[4180]: E0220 11:49:12.013739 4180 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 20 11:49:12.013996 master-0 kubenswrapper[4180]: E0220 11:49:12.013925 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:14.013889385 +0000 UTC m=+115.188941205 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found Feb 20 11:49:12.013996 master-0 kubenswrapper[4180]: E0220 11:49:12.013949 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:14.013938266 +0000 UTC m=+115.188990086 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found Feb 20 11:49:12.013996 master-0 kubenswrapper[4180]: E0220 11:49:12.013986 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. 
No retries permitted until 2026-02-20 11:49:14.013980307 +0000 UTC m=+115.189032127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found Feb 20 11:49:12.513111 master-0 kubenswrapper[4180]: E0220 11:49:12.512943 4180 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" podUID="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" Feb 20 11:49:12.527588 master-0 kubenswrapper[4180]: I0220 11:49:12.527535 4180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" podStartSLOduration=77.527503997 podStartE2EDuration="1m17.527503997s" podCreationTimestamp="2026-02-20 11:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:49:11.53861162 +0000 UTC m=+112.713663460" watchObservedRunningTime="2026-02-20 11:49:12.527503997 +0000 UTC m=+113.702555817" Feb 20 11:49:13.935438 master-0 kubenswrapper[4180]: I0220 11:49:13.935118 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: I0220 11:49:13.935455 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: E0220 11:49:13.935371 4180 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: I0220 11:49:13.935561 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: E0220 11:49:13.935622 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.93559563 +0000 UTC m=+119.110647480 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: E0220 11:49:13.935747 4180 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: E0220 11:49:13.935800 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: E0220 11:49:13.935865 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.935834017 +0000 UTC m=+119.110885877 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: E0220 11:49:13.935898 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.935882298 +0000 UTC m=+119.110934158 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: I0220 11:49:13.935963 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: I0220 11:49:13.936033 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: I0220 11:49:13.936075 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: E0220 11:49:13.936149 4180 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: E0220 11:49:13.936149 4180 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 11:49:13.936589 master-0 kubenswrapper[4180]: E0220 11:49:13.936199 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls podName:7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.936181287 +0000 UTC m=+119.111233137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-r9ntt" (UID: "7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca") : secret "image-registry-operator-tls" not found
Feb 20 11:49:13.938506 master-0 kubenswrapper[4180]: E0220 11:49:13.936223 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.936211688 +0000 UTC m=+119.111263548 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "node-tuning-operator-tls" not found
Feb 20 11:49:13.938506 master-0 kubenswrapper[4180]: E0220 11:49:13.936356 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 20 11:49:13.938506 master-0 kubenswrapper[4180]: E0220 11:49:13.936457 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.936441395 +0000 UTC m=+119.111493245 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found
Feb 20 11:49:13.938506 master-0 kubenswrapper[4180]: I0220 11:49:13.936693 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 11:49:13.938506 master-0 kubenswrapper[4180]: E0220 11:49:13.936873 4180 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 11:49:13.938506 master-0 kubenswrapper[4180]: E0220 11:49:13.936936 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls podName:db2a7cb1-1d05-4b24-86ed-f823fad5013e nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.936918609 +0000 UTC m=+119.111970469 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls") pod "ingress-operator-6569778c84-kw2v6" (UID: "db2a7cb1-1d05-4b24-86ed-f823fad5013e") : secret "metrics-tls" not found
Feb 20 11:49:14.038370 master-0 kubenswrapper[4180]: I0220 11:49:14.038279 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:14.038629 master-0 kubenswrapper[4180]: E0220 11:49:14.038469 4180 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 11:49:14.038629 master-0 kubenswrapper[4180]: I0220 11:49:14.038576 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:14.038629 master-0 kubenswrapper[4180]: E0220 11:49:14.038598 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:18.038571108 +0000 UTC m=+119.213622958 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found
Feb 20 11:49:14.038840 master-0 kubenswrapper[4180]: I0220 11:49:14.038688 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j"
Feb 20 11:49:14.038840 master-0 kubenswrapper[4180]: I0220 11:49:14.038761 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:14.038840 master-0 kubenswrapper[4180]: I0220 11:49:14.038824 4180 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:14.039012 master-0 kubenswrapper[4180]: E0220 11:49:14.038983 4180 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 20 11:49:14.039119 master-0 kubenswrapper[4180]: E0220 11:49:14.039045 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:18.039024181 +0000 UTC m=+119.214076041 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found
Feb 20 11:49:14.039119 master-0 kubenswrapper[4180]: E0220 11:49:14.039084 4180 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 11:49:14.039311 master-0 kubenswrapper[4180]: E0220 11:49:14.039135 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls podName:b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:18.039119284 +0000 UTC m=+119.214171134 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls") pod "dns-operator-8c7d49845-qhx9j" (UID: "b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8") : secret "metrics-tls" not found
Feb 20 11:49:14.039311 master-0 kubenswrapper[4180]: E0220 11:49:14.039145 4180 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 11:49:14.039311 master-0 kubenswrapper[4180]: E0220 11:49:14.039203 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:18.039185036 +0000 UTC m=+119.214237016 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found
Feb 20 11:49:14.039311 master-0 kubenswrapper[4180]: E0220 11:49:14.039203 4180 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 11:49:14.039311 master-0 kubenswrapper[4180]: E0220 11:49:14.039255 4180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:18.039243028 +0000 UTC m=+119.214294888 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found
Feb 20 11:49:14.117039 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 20 11:49:14.146826 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 20 11:49:14.147323 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 20 11:49:14.155442 master-0 systemd[1]: kubelet.service: Consumed 9.897s CPU time.
Feb 20 11:49:14.174654 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 20 11:49:14.364260 master-0 kubenswrapper[7756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 11:49:14.364260 master-0 kubenswrapper[7756]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 20 11:49:14.364260 master-0 kubenswrapper[7756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 11:49:14.364260 master-0 kubenswrapper[7756]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 11:49:14.365686 master-0 kubenswrapper[7756]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 20 11:49:14.365686 master-0 kubenswrapper[7756]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 11:49:14.373765 master-0 kubenswrapper[7756]: I0220 11:49:14.372573 7756 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377721 7756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377755 7756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377761 7756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377765 7756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377769 7756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377775 7756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377778 7756 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377782 7756 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377786 7756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377790 7756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377793 7756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377797 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377801 7756 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377804 7756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377808 7756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377813 7756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377818 7756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377823 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 11:49:14.377774 master-0 kubenswrapper[7756]: W0220 11:49:14.377827 7756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377832 7756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377836 7756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377840 7756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377844 7756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377848 7756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377852 7756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377856 7756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377861 7756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377865 7756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377869 7756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377872 7756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377876 7756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377884 7756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377887 7756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377891 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377895 7756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377899 7756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377902 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 11:49:14.378992 master-0 kubenswrapper[7756]: W0220 11:49:14.377906 7756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377911 7756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377916 7756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377920 7756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377924 7756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377928 7756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377932 7756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377936 7756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377939 7756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377943 7756 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377946 7756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377950 7756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377954 7756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377957 7756 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377961 7756 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377968 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377971 7756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377975 7756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377980 7756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377983 7756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 11:49:14.381795 master-0 kubenswrapper[7756]: W0220 11:49:14.377987 7756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.377991 7756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.377994 7756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.377998 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378001 7756 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378004 7756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378008 7756 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378011 7756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378015 7756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378020 7756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378025 7756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378029 7756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378033 7756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378037 7756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: W0220 11:49:14.378041 7756 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: I0220 11:49:14.378130 7756 flags.go:64] FLAG: --address="0.0.0.0"
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: I0220 11:49:14.378139 7756 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: I0220 11:49:14.378145 7756 flags.go:64] FLAG: --anonymous-auth="true"
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: I0220 11:49:14.378150 7756 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: I0220 11:49:14.378156 7756 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: I0220 11:49:14.378160 7756 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 20 11:49:14.383074 master-0 kubenswrapper[7756]: I0220 11:49:14.378166 7756 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378171 7756 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378176 7756 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378180 7756 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378185 7756 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378189 7756 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378196 7756 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378200 7756 flags.go:64] FLAG: --cgroup-root=""
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378205 7756 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378210 7756 flags.go:64] FLAG: --client-ca-file=""
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378214 7756 flags.go:64] FLAG: --cloud-config=""
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378219 7756 flags.go:64] FLAG: --cloud-provider=""
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378224 7756 flags.go:64] FLAG: --cluster-dns="[]"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378229 7756 flags.go:64] FLAG: --cluster-domain=""
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378233 7756 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378238 7756 flags.go:64] FLAG: --config-dir=""
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378241 7756 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378246 7756 flags.go:64] FLAG: --container-log-max-files="5"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378251 7756 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378256 7756 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378260 7756 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378265 7756 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378269 7756 flags.go:64] FLAG: --contention-profiling="false"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378273 7756 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378276 7756 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 20 11:49:14.385346 master-0 kubenswrapper[7756]: I0220 11:49:14.378281 7756 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378285 7756 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378290 7756 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378294 7756 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378298 7756 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378302 7756 flags.go:64] FLAG: --enable-load-reader="false"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378306 7756 flags.go:64] FLAG: --enable-server="true"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378310 7756 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378316 7756 flags.go:64] FLAG: --event-burst="100"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378320 7756 flags.go:64] FLAG: --event-qps="50"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378325 7756 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378329 7756 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378333 7756 flags.go:64] FLAG: --eviction-hard=""
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378338 7756 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378342 7756 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378346 7756 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378351 7756 flags.go:64] FLAG: --eviction-soft=""
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378355 7756 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378359 7756 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378363 7756 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378367 7756 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378372 7756 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378377 7756 flags.go:64] FLAG: --fail-swap-on="true"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378381 7756 flags.go:64] FLAG: --feature-gates=""
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378386 7756 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 20 11:49:14.386078 master-0 kubenswrapper[7756]: I0220 11:49:14.378390 7756 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378395 7756 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378399 7756 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378403 7756 flags.go:64] FLAG: --healthz-port="10248"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378407 7756 flags.go:64] FLAG: --help="false"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378411 7756 flags.go:64] FLAG: --hostname-override=""
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378415 7756 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378420 7756 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378424 7756 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378427 7756 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378431 7756 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378435 7756 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378439 7756 flags.go:64] FLAG: --image-service-endpoint=""
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378443 7756 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378447 7756 flags.go:64] FLAG: --kube-api-burst="100"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378451 7756 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378456 7756 flags.go:64] FLAG: --kube-api-qps="50"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378460 7756 flags.go:64] FLAG: --kube-reserved=""
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378464 7756 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378468 7756 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378472 7756 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378476 7756 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378480 7756 flags.go:64] FLAG: --lock-file=""
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378484 7756 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378488 7756 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 20 11:49:14.386851 master-0 kubenswrapper[7756]: I0220 11:49:14.378493 7756 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378499 7756 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378503 7756 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378507 7756 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378511 7756 flags.go:64] FLAG: --logging-format="text"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378515 7756 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378519 7756 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378537 7756 flags.go:64] FLAG: --manifest-url=""
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378542 7756 flags.go:64] FLAG: --manifest-url-header=""
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378548 7756 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378552 7756 flags.go:64] FLAG: --max-open-files="1000000"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378557 7756 flags.go:64] FLAG: --max-pods="110"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378561 7756 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378565 7756 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378569 7756 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378573 7756 flags.go:64] FLAG:
--minimum-container-ttl-duration="6m0s" Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378577 7756 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378581 7756 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378585 7756 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378596 7756 flags.go:64] FLAG: --node-status-max-images="50" Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378600 7756 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378604 7756 flags.go:64] FLAG: --oom-score-adj="-999" Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378608 7756 flags.go:64] FLAG: --pod-cidr="" Feb 20 11:49:14.387645 master-0 kubenswrapper[7756]: I0220 11:49:14.378616 7756 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378622 7756 flags.go:64] FLAG: --pod-manifest-path="" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378626 7756 flags.go:64] FLAG: --pod-max-pids="-1" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378631 7756 flags.go:64] FLAG: --pods-per-core="0" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378634 7756 flags.go:64] FLAG: --port="10250" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378645 7756 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378649 7756 flags.go:64] FLAG: --provider-id="" Feb 20 
11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378653 7756 flags.go:64] FLAG: --qos-reserved="" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378657 7756 flags.go:64] FLAG: --read-only-port="10255" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378662 7756 flags.go:64] FLAG: --register-node="true" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378666 7756 flags.go:64] FLAG: --register-schedulable="true" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378670 7756 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378678 7756 flags.go:64] FLAG: --registry-burst="10" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378682 7756 flags.go:64] FLAG: --registry-qps="5" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378686 7756 flags.go:64] FLAG: --reserved-cpus="" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378689 7756 flags.go:64] FLAG: --reserved-memory="" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378695 7756 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378699 7756 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378703 7756 flags.go:64] FLAG: --rotate-certificates="false" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378707 7756 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378711 7756 flags.go:64] FLAG: --runonce="false" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378715 7756 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378720 7756 
flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378724 7756 flags.go:64] FLAG: --seccomp-default="false" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378728 7756 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378732 7756 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 20 11:49:14.388216 master-0 kubenswrapper[7756]: I0220 11:49:14.378736 7756 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378740 7756 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378744 7756 flags.go:64] FLAG: --storage-driver-password="root" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378749 7756 flags.go:64] FLAG: --storage-driver-secure="false" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378752 7756 flags.go:64] FLAG: --storage-driver-table="stats" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378757 7756 flags.go:64] FLAG: --storage-driver-user="root" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378763 7756 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378768 7756 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378772 7756 flags.go:64] FLAG: --system-cgroups="" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378776 7756 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378782 7756 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378789 7756 
flags.go:64] FLAG: --tls-cert-file="" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378793 7756 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378799 7756 flags.go:64] FLAG: --tls-min-version="" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378802 7756 flags.go:64] FLAG: --tls-private-key-file="" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378806 7756 flags.go:64] FLAG: --topology-manager-policy="none" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378811 7756 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378815 7756 flags.go:64] FLAG: --topology-manager-scope="container" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378819 7756 flags.go:64] FLAG: --v="2" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378824 7756 flags.go:64] FLAG: --version="false" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378829 7756 flags.go:64] FLAG: --vmodule="" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378835 7756 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: I0220 11:49:14.378839 7756 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: W0220 11:49:14.378947 7756 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 20 11:49:14.388936 master-0 kubenswrapper[7756]: W0220 11:49:14.378952 7756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378956 7756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378960 7756 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378964 7756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378967 7756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378971 7756 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378974 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378978 7756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378984 7756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378988 7756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378992 7756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.378996 7756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.379001 7756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.379005 7756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.379011 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.379016 7756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.379019 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.379023 7756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.379027 7756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 20 11:49:14.389512 master-0 kubenswrapper[7756]: W0220 11:49:14.379033 7756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379036 7756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379040 7756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379043 7756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379047 7756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379050 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379054 7756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 11:49:14.390048 
master-0 kubenswrapper[7756]: W0220 11:49:14.379058 7756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379062 7756 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379065 7756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379069 7756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379072 7756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379076 7756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379079 7756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379083 7756 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379086 7756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379090 7756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379094 7756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379097 7756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379100 7756 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 20 11:49:14.390048 master-0 kubenswrapper[7756]: W0220 11:49:14.379104 
7756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379107 7756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379111 7756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379115 7756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379120 7756 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379125 7756 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379128 7756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379134 7756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379138 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379142 7756 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379145 7756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379149 7756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379155 7756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379158 
7756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379162 7756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379165 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379169 7756 feature_gate.go:330] unrecognized feature gate: Example Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379175 7756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379178 7756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379182 7756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 20 11:49:14.390567 master-0 kubenswrapper[7756]: W0220 11:49:14.379185 7756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379189 7756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379193 7756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379196 7756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379200 7756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379204 7756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379210 7756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379213 7756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379217 7756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379221 7756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379224 7756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: W0220 11:49:14.379228 7756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 20 11:49:14.391087 master-0 kubenswrapper[7756]: I0220 11:49:14.379241 7756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 20 11:49:14.397429 master-0 kubenswrapper[7756]: I0220 11:49:14.397373 7756 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 20 11:49:14.397429 master-0 kubenswrapper[7756]: I0220 11:49:14.397423 7756 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 20 11:49:14.397665 master-0 kubenswrapper[7756]: W0220 11:49:14.397600 7756 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 20 11:49:14.397710 master-0 
kubenswrapper[7756]: W0220 11:49:14.397686 7756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 20 11:49:14.397752 master-0 kubenswrapper[7756]: W0220 11:49:14.397708 7756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 20 11:49:14.397752 master-0 kubenswrapper[7756]: W0220 11:49:14.397723 7756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 11:49:14.397826 master-0 kubenswrapper[7756]: W0220 11:49:14.397775 7756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 20 11:49:14.397826 master-0 kubenswrapper[7756]: W0220 11:49:14.397789 7756 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 20 11:49:14.397826 master-0 kubenswrapper[7756]: W0220 11:49:14.397801 7756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 11:49:14.397826 master-0 kubenswrapper[7756]: W0220 11:49:14.397813 7756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 20 11:49:14.397826 master-0 kubenswrapper[7756]: W0220 11:49:14.397822 7756 feature_gate.go:330] unrecognized feature gate: Example Feb 20 11:49:14.397826 master-0 kubenswrapper[7756]: W0220 11:49:14.397832 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397842 7756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397852 7756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397861 7756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397869 7756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397877 7756 feature_gate.go:330] 
unrecognized feature gate: AdminNetworkPolicy Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397885 7756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397894 7756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397902 7756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397910 7756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397918 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397925 7756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397933 7756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397941 7756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397949 7756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397957 7756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397965 7756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397973 7756 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397981 7756 feature_gate.go:330] unrecognized feature gate: 
BuildCSIVolumes Feb 20 11:49:14.397999 master-0 kubenswrapper[7756]: W0220 11:49:14.397989 7756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398000 7756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398013 7756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398024 7756 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398035 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398045 7756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398056 7756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398067 7756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398076 7756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398084 7756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398092 7756 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398100 7756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398108 7756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 20 11:49:14.398510 
master-0 kubenswrapper[7756]: W0220 11:49:14.398117 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398125 7756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398134 7756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398141 7756 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398149 7756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398157 7756 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398165 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 11:49:14.398510 master-0 kubenswrapper[7756]: W0220 11:49:14.398173 7756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398180 7756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398191 7756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398204 7756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398215 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398224 7756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398235 7756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398245 7756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398255 7756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398263 7756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398272 7756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398280 7756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398288 7756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398296 7756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398304 7756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398312 7756 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398320 7756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398328 7756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398335 7756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 11:49:14.399053 master-0 kubenswrapper[7756]: W0220 11:49:14.398343 7756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398351 7756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398359 7756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398368 7756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398375 7756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: I0220 11:49:14.398388 7756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398673 7756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398690 7756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398699 7756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398707 7756 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398715 7756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398724 7756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398732 7756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398740 7756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398748 7756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 11:49:14.399521 master-0 kubenswrapper[7756]: W0220 11:49:14.398756 7756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398764 7756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398773 7756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398780 7756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398788 7756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398796 7756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398804 7756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398812 7756 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398823 7756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398834 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398843 7756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398851 7756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398861 7756 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398869 7756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398877 7756 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398886 7756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398894 7756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398902 7756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398910 7756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 11:49:14.399913 master-0 kubenswrapper[7756]: W0220 11:49:14.398917 7756 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.398925 7756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.398933 7756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.398940 7756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.398949 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.398963 7756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.398971 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.398979 7756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.398986 7756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.398994 7756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399002 7756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399010 7756 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399019 7756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399027 7756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399037 7756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399045 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399054 7756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399064 7756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399074 7756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399082 7756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 11:49:14.400428 master-0 kubenswrapper[7756]: W0220 11:49:14.399090 7756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399098 7756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399106 7756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399114 7756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399122 7756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399130 7756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399138 7756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399146 7756 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399153 7756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399161 7756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399169 7756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399176 7756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399186 7756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399197 7756 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399206 7756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399216 7756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399225 7756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399235 7756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399244 7756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 11:49:14.400995 master-0 kubenswrapper[7756]: W0220 11:49:14.399253 7756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 11:49:14.401473 master-0 kubenswrapper[7756]: W0220 11:49:14.399262 7756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 11:49:14.401473 master-0 kubenswrapper[7756]: W0220 11:49:14.399271 7756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 11:49:14.401473 master-0 kubenswrapper[7756]: W0220 11:49:14.399279 7756 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 11:49:14.401473 master-0 kubenswrapper[7756]: W0220 11:49:14.399288 7756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 11:49:14.401473 master-0 kubenswrapper[7756]: I0220 11:49:14.399300 7756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 11:49:14.401473 master-0 kubenswrapper[7756]: I0220 11:49:14.399607 7756 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 20 11:49:14.402381 master-0 kubenswrapper[7756]: I0220 11:49:14.402337 7756 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 20 11:49:14.402503 master-0 kubenswrapper[7756]: I0220 11:49:14.402469 7756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 20 11:49:14.402970 master-0 kubenswrapper[7756]: I0220 11:49:14.402933 7756 server.go:997] "Starting client certificate rotation"
Feb 20 11:49:14.402970 master-0 kubenswrapper[7756]: I0220 11:49:14.402961 7756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 20 11:49:14.403344 master-0 kubenswrapper[7756]: I0220 11:49:14.403215 7756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-21 11:39:43 +0000 UTC, rotation deadline is 2026-02-21 05:18:12.548516798 +0000 UTC
Feb 20 11:49:14.403379 master-0 kubenswrapper[7756]: I0220 11:49:14.403343 7756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h28m58.145180275s for next certificate rotation
Feb 20 11:49:14.404071 master-0 kubenswrapper[7756]: I0220 11:49:14.404032 7756 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 11:49:14.406370 master-0 kubenswrapper[7756]: I0220 11:49:14.406315 7756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 11:49:14.418765 master-0 kubenswrapper[7756]: I0220 11:49:14.418718 7756 log.go:25] "Validated CRI v1 runtime API"
Feb 20 11:49:14.422623 master-0 kubenswrapper[7756]: I0220 11:49:14.422591 7756 log.go:25] "Validated CRI v1 image API"
Feb 20 11:49:14.424780 master-0 kubenswrapper[7756]: I0220 11:49:14.424731 7756 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 20 11:49:14.430161 master-0 kubenswrapper[7756]: I0220 11:49:14.430097 7756 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 e4a1b3a0-c6e7-4552-b1bb-6cc9ae049a6f:/dev/vda3]
Feb 20 11:49:14.430463 master-0 kubenswrapper[7756]: I0220 11:49:14.430148 7756 fs.go:136]
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10/userdata/shm major:0 minor:110 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/78ca76bb28058c596e989b94f315e85b6607b7b0e487f9746f2eff407fceb169/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/78ca76bb28058c596e989b94f315e85b6607b7b0e487f9746f2eff407fceb169/userdata/shm major:0 minor:300 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c/userdata/shm major:0 minor:286 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402/userdata/shm major:0 minor:282 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9c62c9e4d7c03ed804b559ff9f9468e7a7f91ed8870a1b6239bb6b24438d3b6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c62c9e4d7c03ed804b559ff9f9468e7a7f91ed8870a1b6239bb6b24438d3b6a/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a92cb32c4be6840fe62cceeff083a250664f650a02bcc7c9c164c3636c13a84d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a92cb32c4be6840fe62cceeff083a250664f650a02bcc7c9c164c3636c13a84d/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ad27979ee67ec73db6166a66f6c8de5d02655b589472440fd2f397e6aebb3ab2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ad27979ee67ec73db6166a66f6c8de5d02655b589472440fd2f397e6aebb3ab2/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63/userdata/shm major:0 minor:127 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b/userdata/shm major:0 minor:304 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cbd2814207ea73c81ee03ec39936289eb40513d40ec1dfdddcdf33cff0834b18/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cbd2814207ea73c81ee03ec39936289eb40513d40ec1dfdddcdf33cff0834b18/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d2c649879e879ea783f4e70fae9dbd4ad2a036263190c8c86c941dcd3804935b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d2c649879e879ea783f4e70fae9dbd4ad2a036263190c8c86c941dcd3804935b/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d45bdb88cf4fb87c1f9683f4dd82403ae62e23be61f87cc716489058be0075c3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d45bdb88cf4fb87c1f9683f4dd82403ae62e23be61f87cc716489058be0075c3/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e0641894015187b3510c96c6c6bd4f01c441dcb52b2dacc02ae9839b7ddf2146/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e0641894015187b3510c96c6c6bd4f01c441dcb52b2dacc02ae9839b7ddf2146/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fcd9999695d850ee86685844ce22164c47296c700a3e8af3d20ba2a180990b4a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fcd9999695d850ee86685844ce22164c47296c700a3e8af3d20ba2a180990b4a/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~projected/kube-api-access-j2tk7:{mountpoint:/var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~projected/kube-api-access-j2tk7 major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~projected/kube-api-access-8k2dv:{mountpoint:/var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~projected/kube-api-access-8k2dv major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07281644-2789-424f-8429-aa4448dda01e/volumes/kubernetes.io~projected/kube-api-access-l5pw4:{mountpoint:/var/lib/kubelet/pods/07281644-2789-424f-8429-aa4448dda01e/volumes/kubernetes.io~projected/kube-api-access-l5pw4 major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1709ef31-9ddd-42bf-9a95-4be4502a0828/volumes/kubernetes.io~projected/kube-api-access-79j9f:{mountpoint:/var/lib/kubelet/pods/1709ef31-9ddd-42bf-9a95-4be4502a0828/volumes/kubernetes.io~projected/kube-api-access-79j9f major:0 minor:135 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~projected/kube-api-access-lvjcp:{mountpoint:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~projected/kube-api-access-lvjcp major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/etcd-client major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~projected/kube-api-access-7mggv:{mountpoint:/var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~projected/kube-api-access-7mggv major:0 minor:270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~projected/kube-api-access-8p4w6:{mountpoint:/var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~projected/kube-api-access-8p4w6 major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~secret/serving-cert major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/22bba1b3-587d-4802-b4ae-946827c3fa7a/volumes/kubernetes.io~projected/kube-api-access-2wnh5:{mountpoint:/var/lib/kubelet/pods/22bba1b3-587d-4802-b4ae-946827c3fa7a/volumes/kubernetes.io~projected/kube-api-access-2wnh5 major:0 minor:291 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~projected/kube-api-access-bpnmz:{mountpoint:/var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~projected/kube-api-access-bpnmz major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~projected/kube-api-access-7ts6s:{mountpoint:/var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~projected/kube-api-access-7ts6s major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~projected/kube-api-access-5j4cs:{mountpoint:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~projected/kube-api-access-5j4cs major:0 minor:141 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~projected/kube-api-access-8nd7r:{mountpoint:/var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~projected/kube-api-access-8nd7r major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~projected/kube-api-access-26x7b:{mountpoint:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~projected/kube-api-access-26x7b major:0 minor:292 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/533fe3c7-504f-40aa-aab0-8d66ef27920f/volumes/kubernetes.io~projected/kube-api-access-jrwcs:{mountpoint:/var/lib/kubelet/pods/533fe3c7-504f-40aa-aab0-8d66ef27920f/volumes/kubernetes.io~projected/kube-api-access-jrwcs major:0 minor:108 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~projected/kube-api-access-7vvm8:{mountpoint:/var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~projected/kube-api-access-7vvm8 major:0 minor:295 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~secret/serving-cert major:0 minor:259 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/67f890c8-05a1-4797-8da8-6194aea0df9a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/67f890c8-05a1-4797-8da8-6194aea0df9a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:109 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~projected/kube-api-access-2zkbq:{mountpoint:/var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~projected/kube-api-access-2zkbq major:0 minor:288 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~secret/serving-cert major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6dfca740-0387-428a-b957-3e8a09c6e352/volumes/kubernetes.io~projected/kube-api-access-d4457:{mountpoint:/var/lib/kubelet/pods/6dfca740-0387-428a-b957-3e8a09c6e352/volumes/kubernetes.io~projected/kube-api-access-d4457 major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/kube-api-access-lqxhp:{mountpoint:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/kube-api-access-lqxhp major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~projected/kube-api-access-ms8wk:{mountpoint:/var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~projected/kube-api-access-ms8wk major:0 minor:164 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~secret/webhook-cert major:0 minor:163 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/839bf5b1-b242-4bbd-bc09-cf6abcf7f734/volumes/kubernetes.io~projected/kube-api-access-pvxsh:{mountpoint:/var/lib/kubelet/pods/839bf5b1-b242-4bbd-bc09-cf6abcf7f734/volumes/kubernetes.io~projected/kube-api-access-pvxsh major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/906307ef-d988-49e7-9d63-39116a2c4880/volumes/kubernetes.io~projected/kube-api-access-5j82z:{mountpoint:/var/lib/kubelet/pods/906307ef-d988-49e7-9d63-39116a2c4880/volumes/kubernetes.io~projected/kube-api-access-5j82z major:0 minor:280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/volumes/kubernetes.io~projected/kube-api-access-rcnmk:{mountpoint:/var/lib/kubelet/pods/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/volumes/kubernetes.io~projected/kube-api-access-rcnmk major:0 minor:290 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~projected/kube-api-access-2k8n8:{mountpoint:/var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~projected/kube-api-access-2k8n8 major:0 minor:276 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~projected/kube-api-access-bpk24:{mountpoint:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~projected/kube-api-access-bpk24 major:0 minor:266 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/kube-api-access-6td56:{mountpoint:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/kube-api-access-6td56 major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dbce6cdc-040a-48e1-8a81-b6ff9c180eba/volumes/kubernetes.io~projected/kube-api-access-z2kct:{mountpoint:/var/lib/kubelet/pods/dbce6cdc-040a-48e1-8a81-b6ff9c180eba/volumes/kubernetes.io~projected/kube-api-access-z2kct major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783/volumes/kubernetes.io~projected/kube-api-access-jx26k:{mountpoint:/var/lib/kubelet/pods/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783/volumes/kubernetes.io~projected/kube-api-access-jx26k major:0 minor:285 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~projected/kube-api-access major:0 minor:289 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~secret/serving-cert major:0 minor:262 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/eb135cff-1a2e-468d-80ab-f7db3f57552a/volumes/kubernetes.io~projected/kube-api-access-tk5sc:{mountpoint:/var/lib/kubelet/pods/eb135cff-1a2e-468d-80ab-f7db3f57552a/volumes/kubernetes.io~projected/kube-api-access-tk5sc major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~projected/kube-api-access major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~secret/serving-cert major:0 minor:245 fsType:tmpfs blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/a0709a6e5ed1770dae849b00a9e10d6fbe9286956d53800d219fad5d23a3fea8/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/38f10990a5df06d3a2c9c64e81a15c4b79a393e7c3ab25a9f7beab2843b63e5d/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/a81ec534eaaa1bdabe936bfbe727741f37c7b2a7cb569f434ddad6df5b6f263b/merged major:0 minor:118 fsType:overlay blockSize:0} 
overlay_0-129:{mountpoint:/var/lib/containers/storage/overlay/5ead3c2fe9331eee2392c60bf3d1430faf4c5b72ab359230a6f80cb9ce800024/merged major:0 minor:129 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/365195f281d9ffa8b5e1ae2fbc731eac8b5a135ebe5ef3528b5f70f92fd571aa/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/f6b4fb8651bac521ae7784eda67f45f5d35005b38755d8975940556eb2262fd9/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/5d498c3a8294fedcc9d94bd1dbe29b45f6517e03adc5006ed4f2964d466eb992/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-145:{mountpoint:/var/lib/containers/storage/overlay/45c0b99da5d66b0798fc61ef580f24e38dee8ceb190011774726bba3e8fff709/merged major:0 minor:145 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/be69df5233b5af0c4e3370bc8cf303f59459e2ef1d455eabb519f4fa40402cec/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/d469f84cae0c1588db395b1c865d99c7a18020839d0bf74e66f2594ecb946424/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/f7fa3e49692e2c82192386b57c71c192386de18e3a812e5bb79008861aa47289/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/d49e3416cb986f9a651c666682c110f4fb734709abc650627fcdd4aa04efed50/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/e5d06a3335de6480cadc16ef6806428eac761ec1c4de61ef5135916221fd600f/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/c4457d3ef99a05549ebd926d7a03708d9f14421ebff8f10d486e1e5375567809/merged major:0 minor:158 fsType:overlay blockSize:0} 
overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/9a50879f693bf49f1958361b72b697efa29d966960118671a041b0df202a75be/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/6a75ba6e6a689efe7d539fff12f1b219de9af43f40c1f3c5281a59a9039d7862/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/1204a7d44ce1cd4f8e62d2f7d57112a4e984b156fdcef260b7f422b8d54f241d/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/5f023b95f6754db86158e9fb51578abc76440ac93211ba1e306cebe18d24a394/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/f91e86a9a8abc36eb8193a972418afe7eac6588b8a9023457f595149053bf86e/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/c4d3d5230916f1539bebd3111522faff975309105e0e6394089928bc51517fbd/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-181:{mountpoint:/var/lib/containers/storage/overlay/4f24bc49c23526627efce00bdc5528171633d74e8d696ed86e015280690ef190/merged major:0 minor:181 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/38cfe351a3623e004bb59547ad926bbf93f6c3bf98a2182805aaf44c49d66151/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-192:{mountpoint:/var/lib/containers/storage/overlay/b8af98a8fb3619557c5d235fcf91ec76b069c629788e19c8023fd6f8788387c9/merged major:0 minor:192 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/5bf16715eaed0bf1e5d465d599f51b31436ef0c615c7f55e4913f9eeeb1bfaab/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-205:{mountpoint:/var/lib/containers/storage/overlay/7c610bf44f623fe8f94ec995c09015edfa4af1587cbffe7a059c7a4cad072a42/merged major:0 minor:205 fsType:overlay blockSize:0} 
overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/240cf89cbcd8abc9a162f492ff1e98c487e8a8b01f998848db9e6ace84e55d7d/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/d8056bf682ee65a8a3f4bf943557caabe0efaaf5977278855a317b3e8fc3871a/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/a3712d9182f65cb49ca86b5e488d31f94753347863238f36f7069b73f8605c13/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/353c37e500cd11a05a1699cabffb77c279258fda4d09352a8e92bfe06d3ddcd8/merged major:0 minor:221 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/5ef347ad83e552ec7e1bc00c1ca51245e8749adf576e709cde7047acd5c1ff08/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/35bf4119eab559956fc810fed7abf821fa146c38402f736ed8f4b3ddb170550b/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/7aecd28550392750f7569fa684701738e1eea651aa7b4434c78f80ecb5ae5dc3/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/dfec895fbbef2cb5a492b8aca6351498d6f0380fecc837a68002e4fca2453a80/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-306:{mountpoint:/var/lib/containers/storage/overlay/843b29e3b1b64772806fac526d0c2bdfa52aa840fb7b336a10b3de9446b6322b/merged major:0 minor:306 fsType:overlay blockSize:0} overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/9fa73a3c5987dd4b016484975c84611c66ad31f2adb8417387289e6cb250a01f/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/a0f1273e9383fe485d886c7a2a383c9fd3d111ff7e2c1f2aacfcb2d009e98706/merged major:0 minor:314 fsType:overlay blockSize:0} 
overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/e802df0597871f848b404bc72df01c6f4e2b4497e11dd46e7500b966d110fa15/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/36c643938b75ac36c687ef3cbb4112c2ee738a06748769a8d691fa5dfea33ebf/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/451ffb193175408b1597622ead90101ad8677531d255a5cc34b3c971f75986b2/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/a2ef9aec2c425d431d78d5f3c9e34bfc3980e8157963c83db3a8e51264e829c6/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-324:{mountpoint:/var/lib/containers/storage/overlay/a4f79bc2374c99fa2a82d1d7a5f6c75c874e4aa951512ddc3d23495cd38414f1/merged major:0 minor:324 fsType:overlay blockSize:0} overlay_0-326:{mountpoint:/var/lib/containers/storage/overlay/8a696c78f381a7d480e4d7227b831c94ed505e65ee35149b48e3ec0b1afedc40/merged major:0 minor:326 fsType:overlay blockSize:0} overlay_0-328:{mountpoint:/var/lib/containers/storage/overlay/9c368d32b1f2586a5e53b2c2703871907b8222e7bc24d5c7eea326433593bad8/merged major:0 minor:328 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/93a9596948d7f4928710ce170248e951d90c6ae95edd9762b4e3785261bee2a9/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/f56b3d219752f4d0d0fd31922d659c8e012cd0f204b12b039b0a30b7ec39c364/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/d6340ee2577a93a9050e84ab79be3efc38116dfbf9761a1a8ce2bb7a3dad1973/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/a12786102db47431a8e36263466b8f8b8ea48004cb6e64f3a360d1b98f103b78/merged major:0 minor:52 fsType:overlay blockSize:0} 
overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/84f25db7164fce8fff9ab8665a03b6f8dda0ddaf1b77c2576fb2dfa38d4895ed/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/4ee5ebbdda5bd890bff9ef4d8b5b3a2b40a71bdadb3669b8aaa769f2c668eb2e/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/56f800df1e0a01f685ee05550d57fe0af12b9d35038cf63183c2c1ab6f09e571/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/05c070557148b99b3792173fe2df968f881d4b17e3ee95f2e3919623c0b9da40/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/3cc0b47a5c248f8c9c7f8d3a7864fd8346a8ff59c5d15cc1283831b654d0a415/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/5969efca3c6a06bfabf04f77c23a2d8b43ccdc9ad28b4af0b0c101b03ec2a20d/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/66bcb24c18c8de1d2f8882f28234c9b901e704aa1a8c5f560bd9d235a7ae7014/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/77b4fca98472b6ef265191fa54bcbf511bbd5d8d57c27b23dabe3a74adbb5674/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/8bd7c2c26e88952fb6bbf7b9d129e40a255b37b10f42dbbe11f85c528222cc1f/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/60c39457bd19abfebf0d65b729a1cdf1954a0b1d4283314d51e5a6a15c1e4564/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/78e70559151dd37da2835c4f838501725b6ffb9bd2ff26a6e4821bcb0f30ba5e/merged major:0 minor:87 fsType:overlay blockSize:0} 
overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/0a1734b9fcdc31a0e68655d63c602e7380131fe9bade1bff18ea4f1b14205511/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/035a421ec1c71d9e2d83cefc9b2fbffdab2ec9e22a0a8d9ac11533add5ed7ebc/merged major:0 minor:97 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/5be3406c7bca65074a5b86cc7255fa236a564143d9442c96bb97be83567fd8e6/merged major:0 minor:99 fsType:overlay blockSize:0}] Feb 20 11:49:14.471015 master-0 kubenswrapper[7756]: I0220 11:49:14.465062 7756 manager.go:217] Machine: {Timestamp:2026-02-20 11:49:14.46274407 +0000 UTC m=+0.204992158 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1d3cbc82ca3451894ea40b65f988770 SystemUUID:c1d3cbc8-2ca3-4518-94ea-40b65f988770 BootID:5aa007af-ada2-4850-bae5-7cd3dd4060ba Filesystems:[{Device:/var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:261 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:259 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-181 DeviceMajor:0 DeviceMinor:181 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/e0641894015187b3510c96c6c6bd4f01c441dcb52b2dacc02ae9839b7ddf2146/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:260 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~projected/kube-api-access-bpnmz DeviceMajor:0 DeviceMinor:107 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/906307ef-d988-49e7-9d63-39116a2c4880/volumes/kubernetes.io~projected/kube-api-access-5j82z DeviceMajor:0 DeviceMinor:280 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~projected/kube-api-access-ms8wk DeviceMajor:0 DeviceMinor:164 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-145 DeviceMajor:0 DeviceMinor:145 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~projected/kube-api-access-8nd7r DeviceMajor:0 DeviceMinor:251 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:257 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c/userdata/shm DeviceMajor:0 DeviceMinor:286 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~projected/kube-api-access-7mggv DeviceMajor:0 DeviceMinor:270 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-324 DeviceMajor:0 DeviceMinor:324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10/userdata/shm DeviceMajor:0 DeviceMinor:110 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:244 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22bba1b3-587d-4802-b4ae-946827c3fa7a/volumes/kubernetes.io~projected/kube-api-access-2wnh5 DeviceMajor:0 DeviceMinor:291 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:262 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/07281644-2789-424f-8429-aa4448dda01e/volumes/kubernetes.io~projected/kube-api-access-l5pw4 DeviceMajor:0 DeviceMinor:123 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~projected/kube-api-access-2k8n8 DeviceMajor:0 DeviceMinor:276 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-328 DeviceMajor:0 DeviceMinor:328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~projected/kube-api-access-bpk24 DeviceMajor:0 DeviceMinor:266 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~projected/kube-api-access-26x7b DeviceMajor:0 DeviceMinor:292 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:249 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/kube-api-access-6td56 DeviceMajor:0 DeviceMinor:253 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d2c649879e879ea783f4e70fae9dbd4ad2a036263190c8c86c941dcd3804935b/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63/userdata/shm DeviceMajor:0 DeviceMinor:127 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-205 DeviceMajor:0 DeviceMinor:205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1709ef31-9ddd-42bf-9a95-4be4502a0828/volumes/kubernetes.io~projected/kube-api-access-79j9f DeviceMajor:0 DeviceMinor:135 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~projected/kube-api-access-7ts6s DeviceMajor:0 DeviceMinor:139 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/dbce6cdc-040a-48e1-8a81-b6ff9c180eba/volumes/kubernetes.io~projected/kube-api-access-z2kct DeviceMajor:0 DeviceMinor:255 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:273 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~projected/kube-api-access-7vvm8 DeviceMajor:0 DeviceMinor:295 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b/userdata/shm DeviceMajor:0 DeviceMinor:304 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/volumes/kubernetes.io~projected/kube-api-access-rcnmk DeviceMajor:0 DeviceMinor:290 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fcd9999695d850ee86685844ce22164c47296c700a3e8af3d20ba2a180990b4a/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 
DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402/userdata/shm DeviceMajor:0 DeviceMinor:282 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d45bdb88cf4fb87c1f9683f4dd82403ae62e23be61f87cc716489058be0075c3/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:263 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941/userdata/shm 
DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-326 DeviceMajor:0 DeviceMinor:326 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-221 DeviceMajor:0 DeviceMinor:221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~projected/kube-api-access-j2tk7 DeviceMajor:0 DeviceMinor:258 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-306 DeviceMajor:0 DeviceMinor:306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/839bf5b1-b242-4bbd-bc09-cf6abcf7f734/volumes/kubernetes.io~projected/kube-api-access-pvxsh DeviceMajor:0 DeviceMinor:250 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:245 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/67f890c8-05a1-4797-8da8-6194aea0df9a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:109 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-129 DeviceMajor:0 DeviceMinor:129 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/533fe3c7-504f-40aa-aab0-8d66ef27920f/volumes/kubernetes.io~projected/kube-api-access-jrwcs DeviceMajor:0 DeviceMinor:108 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/cbd2814207ea73c81ee03ec39936289eb40513d40ec1dfdddcdf33cff0834b18/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~projected/kube-api-access-2zkbq DeviceMajor:0 DeviceMinor:288 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/78ca76bb28058c596e989b94f315e85b6607b7b0e487f9746f2eff407fceb169/userdata/shm DeviceMajor:0 DeviceMinor:300 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ad27979ee67ec73db6166a66f6c8de5d02655b589472440fd2f397e6aebb3ab2/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~projected/kube-api-access-8k2dv DeviceMajor:0 DeviceMinor:246 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-192 DeviceMajor:0 DeviceMinor:192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~projected/kube-api-access-8p4w6 DeviceMajor:0 DeviceMinor:256 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:289 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~projected/kube-api-access-5j4cs DeviceMajor:0 DeviceMinor:141 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~projected/kube-api-access-lvjcp DeviceMajor:0 DeviceMinor:252 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/eb135cff-1a2e-468d-80ab-f7db3f57552a/volumes/kubernetes.io~projected/kube-api-access-tk5sc DeviceMajor:0 DeviceMinor:254 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6dfca740-0387-428a-b957-3e8a09c6e352/volumes/kubernetes.io~projected/kube-api-access-d4457 DeviceMajor:0 DeviceMinor:279 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a92cb32c4be6840fe62cceeff083a250664f650a02bcc7c9c164c3636c13a84d/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c62c9e4d7c03ed804b559ff9f9468e7a7f91ed8870a1b6239bb6b24438d3b6a/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:163 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/kube-api-access-lqxhp DeviceMajor:0 DeviceMinor:264 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783/volumes/kubernetes.io~projected/kube-api-access-jx26k DeviceMajor:0 DeviceMinor:285 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0aa23336820d484 MacAddress:0e:2e:32:bf:46:9e Speed:10000 Mtu:8900} {Name:2a5fb83b35a727a MacAddress:1a:8a:cc:70:8f:ae Speed:10000 Mtu:8900} {Name:318e8d0079ec567 MacAddress:12:6b:9c:a4:69:40 
Speed:10000 Mtu:8900} {Name:78ca76bb28058c5 MacAddress:ce:d9:31:51:3c:1b Speed:10000 Mtu:8900} {Name:827635bac05f32a MacAddress:12:24:48:6c:f2:ef Speed:10000 Mtu:8900} {Name:8f7330d7b1c8d5e MacAddress:32:92:48:65:cc:20 Speed:10000 Mtu:8900} {Name:a92cb32c4be6840 MacAddress:0a:b1:e9:cd:f5:1e Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:22:87:73:30:49:0a Speed:0 Mtu:8900} {Name:c5619c16f90d5aa MacAddress:fa:63:bc:1c:3f:70 Speed:10000 Mtu:8900} {Name:cbd2814207ea73c MacAddress:a6:dc:69:93:aa:10 Speed:10000 Mtu:8900} {Name:d45bdb88cf4fb87 MacAddress:5e:80:24:a5:22:ad Speed:10000 Mtu:8900} {Name:d5397ea31b615de MacAddress:9a:30:1e:05:4a:1f Speed:10000 Mtu:8900} {Name:e597c41c82bb3cd MacAddress:8a:36:dc:c7:98:23 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:8e:d0:9c Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:ad:cf:59 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:7e:cc:dd:86:6c:ff Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 
Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 
Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 20 11:49:14.471015 master-0 kubenswrapper[7756]: I0220 11:49:14.470984 7756 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 20 11:49:14.471806 master-0 kubenswrapper[7756]: I0220 11:49:14.471346 7756 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 20 11:49:14.471863 master-0 kubenswrapper[7756]: I0220 11:49:14.471833 7756 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 20 11:49:14.472172 master-0 kubenswrapper[7756]: I0220 11:49:14.472105 7756 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 20 11:49:14.472491 master-0 kubenswrapper[7756]: I0220 11:49:14.472166 7756 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percen
tage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 20 11:49:14.472579 master-0 kubenswrapper[7756]: I0220 11:49:14.472556 7756 topology_manager.go:138] "Creating topology manager with none policy" Feb 20 11:49:14.472579 master-0 kubenswrapper[7756]: I0220 11:49:14.472578 7756 container_manager_linux.go:303] "Creating device plugin manager" Feb 20 11:49:14.472645 master-0 kubenswrapper[7756]: I0220 11:49:14.472594 7756 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 11:49:14.472645 master-0 kubenswrapper[7756]: I0220 11:49:14.472634 7756 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 11:49:14.472911 master-0 kubenswrapper[7756]: I0220 11:49:14.472878 7756 state_mem.go:36] "Initialized new in-memory state store" Feb 20 11:49:14.473051 master-0 kubenswrapper[7756]: I0220 11:49:14.473020 7756 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 20 11:49:14.473178 master-0 kubenswrapper[7756]: I0220 11:49:14.473148 7756 kubelet.go:418] "Attempting to sync node with API server" Feb 20 11:49:14.475017 master-0 kubenswrapper[7756]: I0220 11:49:14.474938 7756 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 20 11:49:14.475063 master-0 kubenswrapper[7756]: I0220 11:49:14.475048 7756 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 20 11:49:14.475421 master-0 kubenswrapper[7756]: I0220 11:49:14.475072 7756 kubelet.go:324] "Adding apiserver pod source" Feb 20 11:49:14.475504 master-0 
kubenswrapper[7756]: I0220 11:49:14.475472 7756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 20 11:49:14.477518 master-0 kubenswrapper[7756]: I0220 11:49:14.477477 7756 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 20 11:49:14.477844 master-0 kubenswrapper[7756]: I0220 11:49:14.477806 7756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 20 11:49:14.478288 master-0 kubenswrapper[7756]: I0220 11:49:14.478252 7756 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 20 11:49:14.478478 master-0 kubenswrapper[7756]: I0220 11:49:14.478445 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 20 11:49:14.478552 master-0 kubenswrapper[7756]: I0220 11:49:14.478486 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 20 11:49:14.478552 master-0 kubenswrapper[7756]: I0220 11:49:14.478503 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 20 11:49:14.478552 master-0 kubenswrapper[7756]: I0220 11:49:14.478517 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 20 11:49:14.478639 master-0 kubenswrapper[7756]: I0220 11:49:14.478571 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 20 11:49:14.478639 master-0 kubenswrapper[7756]: I0220 11:49:14.478587 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 20 11:49:14.478639 master-0 kubenswrapper[7756]: I0220 11:49:14.478601 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 20 11:49:14.478639 master-0 kubenswrapper[7756]: I0220 11:49:14.478615 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 20 11:49:14.478639 master-0 
kubenswrapper[7756]: I0220 11:49:14.478632 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 20 11:49:14.478760 master-0 kubenswrapper[7756]: I0220 11:49:14.478646 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 20 11:49:14.478760 master-0 kubenswrapper[7756]: I0220 11:49:14.478666 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 20 11:49:14.478760 master-0 kubenswrapper[7756]: I0220 11:49:14.478691 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 20 11:49:14.478760 master-0 kubenswrapper[7756]: I0220 11:49:14.478732 7756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 20 11:49:14.479258 master-0 kubenswrapper[7756]: I0220 11:49:14.479219 7756 server.go:1280] "Started kubelet" Feb 20 11:49:14.480470 master-0 kubenswrapper[7756]: I0220 11:49:14.480394 7756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 20 11:49:14.480516 master-0 kubenswrapper[7756]: I0220 11:49:14.480474 7756 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 20 11:49:14.481143 master-0 kubenswrapper[7756]: I0220 11:49:14.481086 7756 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 20 11:49:14.481190 master-0 kubenswrapper[7756]: I0220 11:49:14.481155 7756 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 20 11:49:14.481449 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 20 11:49:14.482386 master-0 kubenswrapper[7756]: I0220 11:49:14.482356 7756 server.go:449] "Adding debug handlers to kubelet server" Feb 20 11:49:14.482873 master-0 kubenswrapper[7756]: I0220 11:49:14.482830 7756 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 20 11:49:14.486201 master-0 kubenswrapper[7756]: I0220 11:49:14.486162 7756 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 20 11:49:14.494424 master-0 kubenswrapper[7756]: I0220 11:49:14.494376 7756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 20 11:49:14.494548 master-0 kubenswrapper[7756]: I0220 11:49:14.494494 7756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-21 11:39:43 +0000 UTC, rotation deadline is 2026-02-21 05:12:59.506948025 +0000 UTC Feb 20 11:49:14.494548 master-0 kubenswrapper[7756]: I0220 11:49:14.494519 7756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h23m45.012431021s for next certificate rotation Feb 20 11:49:14.494548 master-0 kubenswrapper[7756]: I0220 11:49:14.494504 7756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 20 11:49:14.494802 master-0 kubenswrapper[7756]: I0220 11:49:14.494755 7756 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 20 11:49:14.494802 master-0 kubenswrapper[7756]: I0220 11:49:14.494800 7756 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 20 11:49:14.494913 master-0 kubenswrapper[7756]: I0220 11:49:14.494868 7756 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 20 11:49:14.495597 master-0 kubenswrapper[7756]: I0220 11:49:14.495564 7756 factory.go:55] Registering systemd factory Feb 20 11:49:14.495597 master-0 kubenswrapper[7756]: I0220 11:49:14.495589 7756 factory.go:221] Registration of the systemd container factory successfully Feb 20 
11:49:14.496081 master-0 kubenswrapper[7756]: I0220 11:49:14.496037 7756 factory.go:153] Registering CRI-O factory Feb 20 11:49:14.496117 master-0 kubenswrapper[7756]: I0220 11:49:14.496090 7756 factory.go:221] Registration of the crio container factory successfully Feb 20 11:49:14.496418 master-0 kubenswrapper[7756]: I0220 11:49:14.496391 7756 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 20 11:49:14.496479 master-0 kubenswrapper[7756]: I0220 11:49:14.496458 7756 factory.go:103] Registering Raw factory Feb 20 11:49:14.496522 master-0 kubenswrapper[7756]: I0220 11:49:14.496495 7756 manager.go:1196] Started watching for new ooms in manager Feb 20 11:49:14.496755 master-0 kubenswrapper[7756]: I0220 11:49:14.496709 7756 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 20 11:49:14.497657 master-0 kubenswrapper[7756]: I0220 11:49:14.497628 7756 manager.go:319] Starting recovery of all containers Feb 20 11:49:14.502175 master-0 kubenswrapper[7756]: I0220 11:49:14.502102 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07281644-2789-424f-8429-aa4448dda01e" volumeName="kubernetes.io/projected/07281644-2789-424f-8429-aa4448dda01e-kube-api-access-l5pw4" seLinuxMountContext="" Feb 20 11:49:14.502242 master-0 kubenswrapper[7756]: I0220 11:49:14.502172 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-env-overrides" seLinuxMountContext="" Feb 20 11:49:14.502242 master-0 kubenswrapper[7756]: I0220 11:49:14.502196 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/projected/478be5e4-cf17-4ebf-a45a-c18cd2b69929-kube-api-access-5j4cs" seLinuxMountContext="" Feb 20 11:49:14.502242 master-0 kubenswrapper[7756]: I0220 11:49:14.502215 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67f890c8-05a1-4797-8da8-6194aea0df9a" volumeName="kubernetes.io/projected/67f890c8-05a1-4797-8da8-6194aea0df9a-kube-api-access" seLinuxMountContext="" Feb 20 11:49:14.502242 master-0 kubenswrapper[7756]: I0220 11:49:14.502236 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="836a6d7e-9b26-425f-ae21-00422515d7fe" volumeName="kubernetes.io/projected/836a6d7e-9b26-425f-ae21-00422515d7fe-kube-api-access-ms8wk" seLinuxMountContext="" Feb 20 11:49:14.502356 master-0 kubenswrapper[7756]: I0220 11:49:14.502257 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01e90033-9ddf-41b4-ab61-e89add6c2fde" volumeName="kubernetes.io/projected/01e90033-9ddf-41b4-ab61-e89add6c2fde-kube-api-access-j2tk7" seLinuxMountContext="" Feb 20 11:49:14.502356 master-0 kubenswrapper[7756]: I0220 11:49:14.502277 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31969539-bfd1-466f-8697-f13cbbd957df" volumeName="kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-env-overrides" seLinuxMountContext="" Feb 20 11:49:14.502410 master-0 kubenswrapper[7756]: I0220 11:49:14.502356 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d060bff-3c25-4eeb-bdd3-e20fb2687645" volumeName="kubernetes.io/projected/4d060bff-3c25-4eeb-bdd3-e20fb2687645-kube-api-access-26x7b" seLinuxMountContext="" Feb 20 11:49:14.502410 master-0 kubenswrapper[7756]: I0220 11:49:14.502382 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/projected/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-kube-api-access-2zkbq" seLinuxMountContext="" Feb 20 11:49:14.502410 master-0 kubenswrapper[7756]: I0220 11:49:14.502403 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db2a7cb1-1d05-4b24-86ed-f823fad5013e" volumeName="kubernetes.io/configmap/db2a7cb1-1d05-4b24-86ed-f823fad5013e-trusted-ca" seLinuxMountContext="" Feb 20 11:49:14.502498 master-0 kubenswrapper[7756]: I0220 11:49:14.502423 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1709ef31-9ddd-42bf-9a95-4be4502a0828" volumeName="kubernetes.io/projected/1709ef31-9ddd-42bf-9a95-4be4502a0828-kube-api-access-79j9f" seLinuxMountContext="" Feb 20 11:49:14.502498 master-0 kubenswrapper[7756]: I0220 11:49:14.502444 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/secret/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.502498 master-0 kubenswrapper[7756]: I0220 11:49:14.502464 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" volumeName="kubernetes.io/projected/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-kube-api-access-2k8n8" seLinuxMountContext="" Feb 20 11:49:14.502498 master-0 kubenswrapper[7756]: I0220 11:49:14.502488 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0b28c90-d5b6-44f3-867c-020ece32ac7d" volumeName="kubernetes.io/secret/e0b28c90-d5b6-44f3-867c-020ece32ac7d-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.502650 master-0 kubenswrapper[7756]: I0220 11:49:14.502508 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1df81fcc-f967-4874-ad16-1a89f0e7875a" volumeName="kubernetes.io/secret/1df81fcc-f967-4874-ad16-1a89f0e7875a-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.502650 master-0 kubenswrapper[7756]: I0220 11:49:14.502551 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbb46f1-1c33-42fc-8371-6a1bea8c28ff" volumeName="kubernetes.io/configmap/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-trusted-ca" seLinuxMountContext="" Feb 20 11:49:14.502650 master-0 kubenswrapper[7756]: I0220 11:49:14.502573 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dfca740-0387-428a-b957-3e8a09c6e352" volumeName="kubernetes.io/configmap/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-trusted-ca" seLinuxMountContext="" Feb 20 11:49:14.502650 master-0 kubenswrapper[7756]: I0220 11:49:14.502600 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" volumeName="kubernetes.io/secret/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.502650 master-0 kubenswrapper[7756]: I0220 11:49:14.502618 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" volumeName="kubernetes.io/secret/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.502650 master-0 kubenswrapper[7756]: I0220 11:49:14.502644 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="312ca024-c8f0-4994-8f9a-b707607341fe" volumeName="kubernetes.io/secret/312ca024-c8f0-4994-8f9a-b707607341fe-metrics-tls" seLinuxMountContext="" Feb 20 11:49:14.502808 master-0 kubenswrapper[7756]: I0220 11:49:14.502663 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="836a6d7e-9b26-425f-ae21-00422515d7fe" volumeName="kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-env-overrides" seLinuxMountContext="" Feb 20 11:49:14.502808 master-0 kubenswrapper[7756]: I0220 11:49:14.502685 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f98aeaf7-bf1a-46af-bf1b-85713baa4c67" volumeName="kubernetes.io/secret/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.502808 master-0 kubenswrapper[7756]: I0220 11:49:14.502703 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01e90033-9ddf-41b4-ab61-e89add6c2fde" volumeName="kubernetes.io/secret/01e90033-9ddf-41b4-ab61-e89add6c2fde-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.502808 master-0 kubenswrapper[7756]: I0220 11:49:14.502721 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5360f3f5-2d07-432f-af45-22659538c55e" volumeName="kubernetes.io/projected/5360f3f5-2d07-432f-af45-22659538c55e-kube-api-access-7vvm8" seLinuxMountContext="" Feb 20 11:49:14.502808 master-0 kubenswrapper[7756]: I0220 11:49:14.502739 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67f890c8-05a1-4797-8da8-6194aea0df9a" volumeName="kubernetes.io/configmap/67f890c8-05a1-4797-8da8-6194aea0df9a-service-ca" seLinuxMountContext="" Feb 20 11:49:14.502808 master-0 kubenswrapper[7756]: I0220 11:49:14.502758 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca" volumeName="kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-kube-api-access-lqxhp" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.502821 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d65a0af4-c96f-44f8-9384-6bae4585983b" volumeName="kubernetes.io/projected/d65a0af4-c96f-44f8-9384-6bae4585983b-kube-api-access-bpk24" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.502845 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" volumeName="kubernetes.io/projected/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-kube-api-access-jx26k" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.502866 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22bba1b3-587d-4802-b4ae-946827c3fa7a" volumeName="kubernetes.io/configmap/22bba1b3-587d-4802-b4ae-946827c3fa7a-telemetry-config" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.502884 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-script-lib" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.502902 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="533fe3c7-504f-40aa-aab0-8d66ef27920f" volumeName="kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-daemon-config" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.502922 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-config" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.502940 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-config" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.502959 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5360f3f5-2d07-432f-af45-22659538c55e" volumeName="kubernetes.io/configmap/5360f3f5-2d07-432f-af45-22659538c55e-config" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.502983 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-service-ca-bundle" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.503004 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="836a6d7e-9b26-425f-ae21-00422515d7fe" volumeName="kubernetes.io/secret/836a6d7e-9b26-425f-ae21-00422515d7fe-webhook-cert" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.503023 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db2a7cb1-1d05-4b24-86ed-f823fad5013e" volumeName="kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-kube-api-access-6td56" seLinuxMountContext="" Feb 20 11:49:14.503044 master-0 kubenswrapper[7756]: I0220 11:49:14.503042 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbce6cdc-040a-48e1-8a81-b6ff9c180eba" volumeName="kubernetes.io/projected/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-kube-api-access-z2kct" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503062 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f1388469-5e55-4c1b-97c3-c88777f29ae7" volumeName="kubernetes.io/projected/f1388469-5e55-4c1b-97c3-c88777f29ae7-kube-api-access" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503082 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07281644-2789-424f-8429-aa4448dda01e" volumeName="kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-sysctl-allowlist" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503101 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07281644-2789-424f-8429-aa4448dda01e" volumeName="kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-binary-copy" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503120 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503139 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1df81fcc-f967-4874-ad16-1a89f0e7875a" volumeName="kubernetes.io/configmap/1df81fcc-f967-4874-ad16-1a89f0e7875a-config" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503157 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="312ca024-c8f0-4994-8f9a-b707607341fe" volumeName="kubernetes.io/projected/312ca024-c8f0-4994-8f9a-b707607341fe-kube-api-access-bpnmz" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503174 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-trusted-ca-bundle" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503192 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" volumeName="kubernetes.io/secret/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503209 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d65a0af4-c96f-44f8-9384-6bae4585983b" volumeName="kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-profile-collector-cert" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503227 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07281644-2789-424f-8429-aa4448dda01e" volumeName="kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-whereabouts-configmap" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503246 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f98aeaf7-bf1a-46af-bf1b-85713baa4c67" volumeName="kubernetes.io/configmap/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-config" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503266 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db2a7cb1-1d05-4b24-86ed-f823fad5013e" volumeName="kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-bound-sa-token" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503284 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-ca" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503303 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-service-ca" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503327 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-client" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503347 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1df81fcc-f967-4874-ad16-1a89f0e7875a" volumeName="kubernetes.io/projected/1df81fcc-f967-4874-ad16-1a89f0e7875a-kube-api-access-7mggv" seLinuxMountContext="" Feb 20 11:49:14.503349 master-0 kubenswrapper[7756]: I0220 11:49:14.503368 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31969539-bfd1-466f-8697-f13cbbd957df" volumeName="kubernetes.io/secret/31969539-bfd1-466f-8697-f13cbbd957df-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503388 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbb46f1-1c33-42fc-8371-6a1bea8c28ff" volumeName="kubernetes.io/projected/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-kube-api-access-8nd7r" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503410 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6dfca740-0387-428a-b957-3e8a09c6e352" volumeName="kubernetes.io/projected/6dfca740-0387-428a-b957-3e8a09c6e352-kube-api-access-d4457" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503431 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01e90033-9ddf-41b4-ab61-e89add6c2fde" volumeName="kubernetes.io/configmap/01e90033-9ddf-41b4-ab61-e89add6c2fde-config" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503450 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8" volumeName="kubernetes.io/projected/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-kube-api-access-rcnmk" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503469 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1388469-5e55-4c1b-97c3-c88777f29ae7" volumeName="kubernetes.io/secret/f1388469-5e55-4c1b-97c3-c88777f29ae7-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503487 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="839bf5b1-b242-4bbd-bc09-cf6abcf7f734" volumeName="kubernetes.io/projected/839bf5b1-b242-4bbd-bc09-cf6abcf7f734-kube-api-access-pvxsh" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503506 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22bba1b3-587d-4802-b4ae-946827c3fa7a" volumeName="kubernetes.io/projected/22bba1b3-587d-4802-b4ae-946827c3fa7a-kube-api-access-2wnh5" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503549 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="533fe3c7-504f-40aa-aab0-8d66ef27920f" volumeName="kubernetes.io/projected/533fe3c7-504f-40aa-aab0-8d66ef27920f-kube-api-access-jrwcs" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503571 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb135cff-1a2e-468d-80ab-f7db3f57552a" volumeName="kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-auth-proxy-config" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503592 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1388469-5e55-4c1b-97c3-c88777f29ae7" volumeName="kubernetes.io/configmap/f1388469-5e55-4c1b-97c3-c88777f29ae7-config" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503611 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" volumeName="kubernetes.io/configmap/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-config" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503637 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5360f3f5-2d07-432f-af45-22659538c55e" volumeName="kubernetes.io/secret/5360f3f5-2d07-432f-af45-22659538c55e-serving-cert" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503656 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="836a6d7e-9b26-425f-ae21-00422515d7fe" volumeName="kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-ovnkube-identity-cm" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503674 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="906307ef-d988-49e7-9d63-39116a2c4880" volumeName="kubernetes.io/projected/906307ef-d988-49e7-9d63-39116a2c4880-kube-api-access-5j82z" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503699 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d060bff-3c25-4eeb-bdd3-e20fb2687645" volumeName="kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-profile-collector-cert" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503727 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="906307ef-d988-49e7-9d63-39116a2c4880" volumeName="kubernetes.io/configmap/906307ef-d988-49e7-9d63-39116a2c4880-iptables-alerter-script" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503745 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" volumeName="kubernetes.io/empty-dir/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-operand-assets" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503762 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0b28c90-d5b6-44f3-867c-020ece32ac7d" volumeName="kubernetes.io/projected/e0b28c90-d5b6-44f3-867c-020ece32ac7d-kube-api-access" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503780 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f98aeaf7-bf1a-46af-bf1b-85713baa4c67" volumeName="kubernetes.io/projected/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-kube-api-access" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503798 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="533fe3c7-504f-40aa-aab0-8d66ef27920f" volumeName="kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-cni-binary-copy" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503817 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/secret/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovn-node-metrics-cert" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503956 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-config" seLinuxMountContext="" Feb 20 11:49:14.504064 master-0 kubenswrapper[7756]: I0220 11:49:14.503983 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca" volumeName="kubernetes.io/configmap/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-trusted-ca" seLinuxMountContext="" Feb 20 11:49:14.507659 master-0 kubenswrapper[7756]: I0220 11:49:14.507596 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb135cff-1a2e-468d-80ab-f7db3f57552a" volumeName="kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-images" seLinuxMountContext="" Feb 20 11:49:14.507751 master-0 kubenswrapper[7756]: I0220 11:49:14.507667 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31969539-bfd1-466f-8697-f13cbbd957df" volumeName="kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-ovnkube-config" seLinuxMountContext="" Feb 20 11:49:14.507751 master-0 kubenswrapper[7756]: I0220 11:49:14.507698 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e0b28c90-d5b6-44f3-867c-020ece32ac7d" volumeName="kubernetes.io/configmap/e0b28c90-d5b6-44f3-867c-020ece32ac7d-config" seLinuxMountContext="" Feb 20 11:49:14.507751 master-0 kubenswrapper[7756]: I0220 11:49:14.507737 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb135cff-1a2e-468d-80ab-f7db3f57552a" volumeName="kubernetes.io/projected/eb135cff-1a2e-468d-80ab-f7db3f57552a-kube-api-access-tk5sc" seLinuxMountContext="" Feb 20 11:49:14.507869 master-0 kubenswrapper[7756]: I0220 11:49:14.507763 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" volumeName="kubernetes.io/projected/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-kube-api-access-8p4w6" seLinuxMountContext="" Feb 20 11:49:14.507869 master-0 kubenswrapper[7756]: I0220 11:49:14.507788 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" volumeName="kubernetes.io/projected/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-kube-api-access-8k2dv" seLinuxMountContext="" Feb 20 11:49:14.507869 master-0 kubenswrapper[7756]: I0220 11:49:14.507813 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/projected/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-kube-api-access-lvjcp" seLinuxMountContext="" Feb 20 11:49:14.507869 master-0 kubenswrapper[7756]: I0220 11:49:14.507841 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31969539-bfd1-466f-8697-f13cbbd957df" volumeName="kubernetes.io/projected/31969539-bfd1-466f-8697-f13cbbd957df-kube-api-access-7ts6s" seLinuxMountContext="" Feb 20 11:49:14.508004 master-0 kubenswrapper[7756]: I0220 11:49:14.507875 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca" volumeName="kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-bound-sa-token" seLinuxMountContext="" Feb 20 11:49:14.508004 master-0 kubenswrapper[7756]: I0220 11:49:14.507894 7756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" volumeName="kubernetes.io/empty-dir/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-available-featuregates" seLinuxMountContext="" Feb 20 11:49:14.508004 master-0 kubenswrapper[7756]: I0220 11:49:14.507912 7756 reconstruct.go:97] "Volume reconstruction finished" Feb 20 11:49:14.508004 master-0 kubenswrapper[7756]: I0220 11:49:14.507924 7756 reconciler.go:26] "Reconciler: start to sync state" Feb 20 11:49:14.513255 master-0 kubenswrapper[7756]: I0220 11:49:14.512636 7756 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 20 11:49:14.575190 master-0 kubenswrapper[7756]: I0220 11:49:14.575067 7756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 20 11:49:14.577290 master-0 kubenswrapper[7756]: I0220 11:49:14.577276 7756 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 20 11:49:14.577395 master-0 kubenswrapper[7756]: I0220 11:49:14.577378 7756 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 20 11:49:14.577482 master-0 kubenswrapper[7756]: I0220 11:49:14.577472 7756 kubelet.go:2335] "Starting kubelet main sync loop" Feb 20 11:49:14.577714 master-0 kubenswrapper[7756]: E0220 11:49:14.577683 7756 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 20 11:49:14.580605 master-0 kubenswrapper[7756]: I0220 11:49:14.580562 7756 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 20 11:49:14.595782 master-0 kubenswrapper[7756]: I0220 11:49:14.595716 7756 generic.go:334] "Generic (PLEG): container finished" podID="478be5e4-cf17-4ebf-a45a-c18cd2b69929" containerID="5b4211a2cc9a2198d36fabbec1b685ffa0d3133fee06da2f4d880279f8a4b229" exitCode=0 Feb 20 11:49:14.610765 master-0 kubenswrapper[7756]: I0220 11:49:14.610690 7756 generic.go:334] "Generic (PLEG): container finished" podID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerID="136d6f3a9793756201eb14c53a4ba43141e49068fbce78152349e9d918491065" exitCode=0 Feb 20 11:49:14.618679 master-0 kubenswrapper[7756]: I0220 11:49:14.618547 7756 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790" exitCode=0 Feb 20 11:49:14.620466 master-0 kubenswrapper[7756]: I0220 11:49:14.620421 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 20 11:49:14.620890 master-0 kubenswrapper[7756]: I0220 11:49:14.620852 7756 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" 
containerID="53e7dc45156105f926a77b4b48981d5e387a572098dd2e0e299ab01a43056605" exitCode=1 Feb 20 11:49:14.620957 master-0 kubenswrapper[7756]: I0220 11:49:14.620887 7756 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="fbba6df4a59d8edb9a6ffa0ddbac2d1f8af28cf04b9ed9d72f140a13ab377500" exitCode=0 Feb 20 11:49:14.629008 master-0 kubenswrapper[7756]: I0220 11:49:14.628950 7756 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="63a4ec3dde8f5a0e5831c20c7c43b03806a786d19e88fcb36793fe30ce83f9e5" exitCode=1 Feb 20 11:49:14.645147 master-0 kubenswrapper[7756]: I0220 11:49:14.645082 7756 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="9c73ec43a36008a1472e95cc448d96cb453a34c7d0f5983c1a526f4f124df839" exitCode=0 Feb 20 11:49:14.645147 master-0 kubenswrapper[7756]: I0220 11:49:14.645124 7756 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="f69d0d27fc97dfc5ca9cd544f311dfc218b6f712d28eef596d03ab2168409d7f" exitCode=0 Feb 20 11:49:14.645147 master-0 kubenswrapper[7756]: I0220 11:49:14.645132 7756 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="3aec6ee8f7b5920e9465051d7cfad692f6df3984abc458694d67b2ca16e3fc95" exitCode=0 Feb 20 11:49:14.645147 master-0 kubenswrapper[7756]: I0220 11:49:14.645140 7756 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="f1fbf807f82eab937178a587053f37db417fee5bbaad310485c0ef4a2b0f6684" exitCode=0 Feb 20 11:49:14.645147 master-0 kubenswrapper[7756]: I0220 11:49:14.645150 7756 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="59c4640ef16d19d630f393a377f5a55900e0d594bb8e948836367e29624486c7" exitCode=0 Feb 20 11:49:14.645147 master-0 
kubenswrapper[7756]: I0220 11:49:14.645158 7756 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="fd6a9476a5e46b15b6371b4f9b6a262cda38dc0b2ce85f673487d39ba4902d2c" exitCode=0 Feb 20 11:49:14.659708 master-0 kubenswrapper[7756]: I0220 11:49:14.659659 7756 generic.go:334] "Generic (PLEG): container finished" podID="952aa6bb-4f60-4582-b978-52ebf9218755" containerID="5ad7139b014a017e9214a9b49d5763ba0bf59d3613eecad560b203e714e96877" exitCode=0 Feb 20 11:49:14.662119 master-0 kubenswrapper[7756]: I0220 11:49:14.662098 7756 manager.go:324] Recovery completed Feb 20 11:49:14.677925 master-0 kubenswrapper[7756]: E0220 11:49:14.677872 7756 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 20 11:49:14.710876 master-0 kubenswrapper[7756]: I0220 11:49:14.710801 7756 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 20 11:49:14.710876 master-0 kubenswrapper[7756]: I0220 11:49:14.710833 7756 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 20 11:49:14.710876 master-0 kubenswrapper[7756]: I0220 11:49:14.710882 7756 state_mem.go:36] "Initialized new in-memory state store" Feb 20 11:49:14.711264 master-0 kubenswrapper[7756]: I0220 11:49:14.711116 7756 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 20 11:49:14.711264 master-0 kubenswrapper[7756]: I0220 11:49:14.711130 7756 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 20 11:49:14.711264 master-0 kubenswrapper[7756]: I0220 11:49:14.711155 7756 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 20 11:49:14.711264 master-0 kubenswrapper[7756]: I0220 11:49:14.711161 7756 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 20 11:49:14.711264 master-0 kubenswrapper[7756]: I0220 11:49:14.711169 7756 policy_none.go:49] "None policy: Start" Feb 20 11:49:14.712537 master-0 kubenswrapper[7756]: I0220 
11:49:14.712493 7756 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 20 11:49:14.712537 master-0 kubenswrapper[7756]: I0220 11:49:14.712517 7756 state_mem.go:35] "Initializing new in-memory state store" Feb 20 11:49:14.712782 master-0 kubenswrapper[7756]: I0220 11:49:14.712746 7756 state_mem.go:75] "Updated machine memory state" Feb 20 11:49:14.712782 master-0 kubenswrapper[7756]: I0220 11:49:14.712778 7756 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 20 11:49:14.728987 master-0 kubenswrapper[7756]: I0220 11:49:14.728944 7756 manager.go:334] "Starting Device Plugin manager" Feb 20 11:49:14.729345 master-0 kubenswrapper[7756]: I0220 11:49:14.729322 7756 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 20 11:49:14.729467 master-0 kubenswrapper[7756]: I0220 11:49:14.729426 7756 server.go:79] "Starting device plugin registration server" Feb 20 11:49:14.730255 master-0 kubenswrapper[7756]: I0220 11:49:14.730237 7756 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 20 11:49:14.730403 master-0 kubenswrapper[7756]: I0220 11:49:14.730349 7756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 20 11:49:14.730692 master-0 kubenswrapper[7756]: I0220 11:49:14.730639 7756 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 20 11:49:14.730933 master-0 kubenswrapper[7756]: I0220 11:49:14.730879 7756 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 20 11:49:14.730933 master-0 kubenswrapper[7756]: I0220 11:49:14.730928 7756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 20 11:49:14.830882 master-0 kubenswrapper[7756]: I0220 11:49:14.830812 7756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 11:49:14.833051 master-0 
kubenswrapper[7756]: I0220 11:49:14.832997 7756 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 11:49:14.833162 master-0 kubenswrapper[7756]: I0220 11:49:14.833064 7756 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 11:49:14.833162 master-0 kubenswrapper[7756]: I0220 11:49:14.833099 7756 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 11:49:14.833312 master-0 kubenswrapper[7756]: I0220 11:49:14.833231 7756 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 11:49:14.844001 master-0 kubenswrapper[7756]: I0220 11:49:14.843798 7756 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Feb 20 11:49:14.844231 master-0 kubenswrapper[7756]: I0220 11:49:14.844191 7756 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 20 11:49:14.879913 master-0 kubenswrapper[7756]: I0220 11:49:14.879155 7756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.879995 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"6f48bf3168ea3ca5cdb5d4b4fe30f40410c99744121d1afe1db8ccea90206a28"}
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.880092 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"d2c649879e879ea783f4e70fae9dbd4ad2a036263190c8c86c941dcd3804935b"}
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.880120 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17066e06e0b2e2b17534c5886b653c787b7eefd7c251f9787e27a3b174b19ab1"
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.880137 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315"}
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.880147 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe"}
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.880157 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790"}
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.880168 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"9c62c9e4d7c03ed804b559ff9f9468e7a7f91ed8870a1b6239bb6b24438d3b6a"}
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.880176 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"6a6cafc7c486ca7c318193e8cb75dc02c40abcaf8818e09b14c243a316830547"}
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.880187 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"53e7dc45156105f926a77b4b48981d5e387a572098dd2e0e299ab01a43056605"}
Feb 20 11:49:14.880186 master-0 kubenswrapper[7756]: I0220 11:49:14.880200 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"fbba6df4a59d8edb9a6ffa0ddbac2d1f8af28cf04b9ed9d72f140a13ab377500"}
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880210 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d"}
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880225 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"6d3121ed9f14f1a68a11c14e19a8ba5e47d812ae84b3f62cc56772a81aa8f139"}
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880235 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"f1682d7b4b37ab8ab7b0e93abba0b5ee3a264e78978d6dc34d6d434f13d2a6ae"}
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880245 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"63a4ec3dde8f5a0e5831c20c7c43b03806a786d19e88fcb36793fe30ce83f9e5"}
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880256 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"fcd9999695d850ee86685844ce22164c47296c700a3e8af3d20ba2a180990b4a"}
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880267 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3"}
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880278 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a"}
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880287 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"e0641894015187b3510c96c6c6bd4f01c441dcb52b2dacc02ae9839b7ddf2146"}
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880322 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74210ef7f7477a5cc9d9d264c0abf5069d4cce4d0d3176995bf660061a1084b1"
Feb 20 11:49:14.880703 master-0 kubenswrapper[7756]: I0220 11:49:14.880335 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaff8df8130a8f21e8f2fac6966945ec9db98da32ff593d737d8c12e79e27bd7"
Feb 20 11:49:14.883911 master-0 kubenswrapper[7756]: I0220 11:49:14.883846 7756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="kube-system/bootstrap-kube-scheduler-master-0" oldPodUID="56c3cb71c9851003c8de7e7c5db4b87e" podUID="33705bb4-8996-4330-a613-4a7a1601592c"
Feb 20 11:49:14.890645 master-0 kubenswrapper[7756]: W0220 11:49:14.890368 7756 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 20 11:49:14.890645 master-0 kubenswrapper[7756]: E0220 11:49:14.890440 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:49:14.890645 master-0 kubenswrapper[7756]: E0220 11:49:14.890553 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:49:14.895802 master-0 kubenswrapper[7756]: E0220 11:49:14.895764 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:49:14.915350 master-0 kubenswrapper[7756]: I0220 11:49:14.915281 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:14.915350 master-0 kubenswrapper[7756]: I0220 11:49:14.915342 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:14.915554 master-0 kubenswrapper[7756]: I0220 11:49:14.915382 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:14.915554 master-0 kubenswrapper[7756]: I0220 11:49:14.915415 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:14.915554 master-0 kubenswrapper[7756]: I0220 11:49:14.915445 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:49:14.915554 master-0 kubenswrapper[7756]: I0220 11:49:14.915467 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:14.915554 master-0 kubenswrapper[7756]: I0220 11:49:14.915494 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:49:14.915554 master-0 kubenswrapper[7756]: I0220 11:49:14.915517 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:49:14.915554 master-0 kubenswrapper[7756]: I0220 11:49:14.915553 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:14.915812 master-0 kubenswrapper[7756]: I0220 11:49:14.915576 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:14.915812 master-0 kubenswrapper[7756]: I0220 11:49:14.915601 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:14.915812 master-0 kubenswrapper[7756]: I0220 11:49:14.915626 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:49:14.915812 master-0 kubenswrapper[7756]: I0220 11:49:14.915649 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:49:14.915812 master-0 kubenswrapper[7756]: I0220 11:49:14.915672 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:14.915812 master-0 kubenswrapper[7756]: I0220 11:49:14.915696 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:14.915812 master-0 kubenswrapper[7756]: I0220 11:49:14.915721 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:14.915812 master-0 kubenswrapper[7756]: I0220 11:49:14.915747 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:49:14.999763 master-0 kubenswrapper[7756]: E0220 11:49:14.998893 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.016315 master-0 kubenswrapper[7756]: I0220 11:49:15.016172 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.016315 master-0 kubenswrapper[7756]: I0220 11:49:15.016263 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016372 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016469 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016521 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016600 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016639 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016679 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016686 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016639 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016732 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016755 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016799 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016755 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016847 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016862 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016889 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016876 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016936 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.016973 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017003 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017066 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017070 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017109 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017157 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017156 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017205 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017225 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017249 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017264 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017314 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017336 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017360 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 11:49:15.017638 master-0 kubenswrapper[7756]: I0220 11:49:15.017397 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:49:15.476620 master-0 kubenswrapper[7756]: I0220 11:49:15.476547 7756 apiserver.go:52] "Watching apiserver"
Feb 20 11:49:15.490466 master-0 kubenswrapper[7756]: I0220 11:49:15.490417 7756 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 20 11:49:15.491461 master-0 kubenswrapper[7756]: I0220 11:49:15.491405 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-s6zmp","openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8","kube-system/bootstrap-kube-scheduler-master-0","openshift-ingress-operator/ingress-operator-6569778c84-kw2v6","openshift-network-operator/iptables-alerter-gkxzr","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8","openshift-network-diagnostics/network-check-target-h5w2t","openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw","openshift-dns-operator/dns-operator-8c7d49845-qhx9j","openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75","openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-marketplace/marketplace-operator-6f5488b997-nr4tg","openshift-multus/multus-9qpc7","openshift-multus/network-metrics-daemon-29622","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g","kube-system/bootstrap-kube-controller-manager-master-0","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l","openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v","openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt","openshift-multus/multus-additional-cni-plugins-f2l64","openshift-network-node-identity/network-node-identity-psm4s","openshift-ovn-kubernetes/ovnkube-node-7l848","openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw","openshift-etcd/etcd-master-0-master-0","openshift-network-operator/network-operator-7d7db75979-fv598","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw"]
Feb 20 11:49:15.491832 master-0 kubenswrapper[7756]: I0220 11:49:15.491801 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"
Feb 20 11:49:15.491975 master-0 kubenswrapper[7756]: I0220 11:49:15.491917 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s6zmp"
Feb 20 11:49:15.494090 master-0 kubenswrapper[7756]: I0220 11:49:15.494047 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 11:49:15.495734 master-0 kubenswrapper[7756]: I0220 11:49:15.495682 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:49:15.498452 master-0 kubenswrapper[7756]: I0220 11:49:15.498416 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:15.498979 master-0 kubenswrapper[7756]: I0220 11:49:15.498951 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:15.499184 master-0 kubenswrapper[7756]: I0220 11:49:15.499151 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:15.500037 master-0 kubenswrapper[7756]: I0220 11:49:15.499993 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 20 11:49:15.501644 master-0 kubenswrapper[7756]: I0220 11:49:15.501504 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:15.505203 master-0 kubenswrapper[7756]: I0220 11:49:15.504704 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 20 11:49:15.505203 master-0 kubenswrapper[7756]: I0220 11:49:15.504753 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 20 11:49:15.505203 master-0 kubenswrapper[7756]: I0220 11:49:15.505113 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 20 11:49:15.507835 master-0 kubenswrapper[7756]: I0220 11:49:15.507796 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.508119 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.508503 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509241 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509291 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509312 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509360 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509508 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509551 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509687 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509727 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509769 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509687 7756 reflector.go:368]
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509858 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.510020 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.510162 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.509768 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 20 11:49:15.510307 master-0 kubenswrapper[7756]: I0220 11:49:15.510348 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.510499 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.510914 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.510951 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.510919 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.511053 7756 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.511159 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.511218 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.511593 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.511818 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.511945 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512068 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512097 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512164 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512197 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 20 
11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512329 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512408 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512422 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512444 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512460 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512519 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512593 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512678 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512754 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.512812 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.513010 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.513141 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.513313 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.513376 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 20 11:49:15.514653 master-0 kubenswrapper[7756]: I0220 11:49:15.513541 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:49:15.518454 master-0 kubenswrapper[7756]: I0220 11:49:15.516596 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 20 11:49:15.518454 master-0 kubenswrapper[7756]: I0220 11:49:15.517350 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 20 11:49:15.518639 master-0 kubenswrapper[7756]: I0220 11:49:15.518567 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:15.518639 master-0 kubenswrapper[7756]: I0220 11:49:15.518620 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2a7cb1-1d05-4b24-86ed-f823fad5013e-trusted-ca\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:15.518761 master-0 kubenswrapper[7756]: I0220 11:49:15.518668 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.518960 7756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519013 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519055 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519102 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519135 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f1388469-5e55-4c1b-97c3-c88777f29ae7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519167 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tk7\" (UniqueName: \"kubernetes.io/projected/01e90033-9ddf-41b4-ab61-e89add6c2fde-kube-api-access-j2tk7\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519204 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1388469-5e55-4c1b-97c3-c88777f29ae7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519238 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2kct\" (UniqueName: \"kubernetes.io/projected/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-kube-api-access-z2kct\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519239 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: 
\"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519279 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519317 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqxhp\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-kube-api-access-lqxhp\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519360 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4w6\" (UniqueName: \"kubernetes.io/projected/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-kube-api-access-8p4w6\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519439 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1388469-5e55-4c1b-97c3-c88777f29ae7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: 
\"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519469 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e90033-9ddf-41b4-ab61-e89add6c2fde-serving-cert\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519493 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e90033-9ddf-41b4-ab61-e89add6c2fde-config\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519510 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-bound-sa-token\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519840 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1388469-5e55-4c1b-97c3-c88777f29ae7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.519978 7756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.520002 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e90033-9ddf-41b4-ab61-e89add6c2fde-serving-cert\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.520003 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1388469-5e55-4c1b-97c3-c88777f29ae7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.520094 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.520142 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-config\") pod 
\"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.520221 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.520265 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6td56\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-kube-api-access-6td56\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.520362 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.520629 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:15.520913 master-0 kubenswrapper[7756]: I0220 11:49:15.520817 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-config\") pod 
\"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:15.525566 master-0 kubenswrapper[7756]: I0220 11:49:15.525504 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:15.527297 master-0 kubenswrapper[7756]: I0220 11:49:15.520888 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e90033-9ddf-41b4-ab61-e89add6c2fde-config\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:15.527833 master-0 kubenswrapper[7756]: I0220 11:49:15.527788 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 20 11:49:15.528205 master-0 kubenswrapper[7756]: I0220 11:49:15.528175 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.528720 master-0 kubenswrapper[7756]: I0220 11:49:15.528687 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.529168 master-0 kubenswrapper[7756]: I0220 11:49:15.529108 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 20 11:49:15.529979 master-0 kubenswrapper[7756]: I0220 11:49:15.529924 7756 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 20 11:49:15.533063 master-0 kubenswrapper[7756]: I0220 11:49:15.533001 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 20 11:49:15.533515 master-0 kubenswrapper[7756]: I0220 11:49:15.533179 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 20 11:49:15.533515 master-0 kubenswrapper[7756]: I0220 11:49:15.533203 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 20 11:49:15.533515 master-0 kubenswrapper[7756]: I0220 11:49:15.533254 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 20 11:49:15.533515 master-0 kubenswrapper[7756]: I0220 11:49:15.533278 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 20 11:49:15.533515 master-0 kubenswrapper[7756]: I0220 11:49:15.533309 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.533515 master-0 kubenswrapper[7756]: I0220 11:49:15.533342 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 20 11:49:15.533515 master-0 kubenswrapper[7756]: I0220 11:49:15.533429 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 20 11:49:15.533886 master-0 kubenswrapper[7756]: I0220 11:49:15.533610 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 20 11:49:15.533886 master-0 kubenswrapper[7756]: I0220 11:49:15.533634 7756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 20 11:49:15.533886 master-0 kubenswrapper[7756]: I0220 11:49:15.533706 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.533886 master-0 kubenswrapper[7756]: I0220 11:49:15.533834 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 20 11:49:15.534094 master-0 kubenswrapper[7756]: I0220 11:49:15.534064 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 20 11:49:15.535686 master-0 kubenswrapper[7756]: I0220 11:49:15.535631 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 20 11:49:15.535958 master-0 kubenswrapper[7756]: I0220 11:49:15.535922 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 20 11:49:15.536117 master-0 kubenswrapper[7756]: I0220 11:49:15.536087 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 20 11:49:15.536958 master-0 kubenswrapper[7756]: I0220 11:49:15.536916 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 20 11:49:15.536958 master-0 kubenswrapper[7756]: I0220 11:49:15.536944 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 20 11:49:15.537214 master-0 kubenswrapper[7756]: I0220 11:49:15.537178 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 20 11:49:15.537214 master-0 kubenswrapper[7756]: I0220 
11:49:15.537196 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 20 11:49:15.537335 master-0 kubenswrapper[7756]: I0220 11:49:15.537308 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 20 11:49:15.537498 master-0 kubenswrapper[7756]: I0220 11:49:15.537337 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 20 11:49:15.537498 master-0 kubenswrapper[7756]: I0220 11:49:15.537480 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.537498 master-0 kubenswrapper[7756]: I0220 11:49:15.537491 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 20 11:49:15.537696 master-0 kubenswrapper[7756]: I0220 11:49:15.537602 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 20 11:49:15.537753 master-0 kubenswrapper[7756]: I0220 11:49:15.537718 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 20 11:49:15.537800 master-0 kubenswrapper[7756]: I0220 11:49:15.537757 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 20 11:49:15.537861 master-0 kubenswrapper[7756]: I0220 11:49:15.537832 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 20 11:49:15.537904 master-0 kubenswrapper[7756]: I0220 11:49:15.537861 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 20 11:49:15.538026 master-0 kubenswrapper[7756]: I0220 11:49:15.537991 7756 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 20 11:49:15.538132 master-0 kubenswrapper[7756]: I0220 11:49:15.538096 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 20 11:49:15.538262 master-0 kubenswrapper[7756]: I0220 11:49:15.538234 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 20 11:49:15.538382 master-0 kubenswrapper[7756]: I0220 11:49:15.538364 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 20 11:49:15.538565 master-0 kubenswrapper[7756]: I0220 11:49:15.538541 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 20 11:49:15.538649 master-0 kubenswrapper[7756]: I0220 11:49:15.537722 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 20 11:49:15.538649 master-0 kubenswrapper[7756]: I0220 11:49:15.538631 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 20 11:49:15.538814 master-0 kubenswrapper[7756]: I0220 11:49:15.538797 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 20 11:49:15.541550 master-0 kubenswrapper[7756]: I0220 11:49:15.541457 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.542178 master-0 kubenswrapper[7756]: I0220 11:49:15.542145 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 20 11:49:15.542232 master-0 kubenswrapper[7756]: I0220 11:49:15.542193 7756 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 20 11:49:15.542380 master-0 kubenswrapper[7756]: I0220 11:49:15.542342 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 20 11:49:15.543936 master-0 kubenswrapper[7756]: I0220 11:49:15.543884 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 20 11:49:15.545641 master-0 kubenswrapper[7756]: I0220 11:49:15.545597 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 20 11:49:15.549171 master-0 kubenswrapper[7756]: I0220 11:49:15.549128 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 20 11:49:15.551573 master-0 kubenswrapper[7756]: I0220 11:49:15.551510 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 20 11:49:15.555092 master-0 kubenswrapper[7756]: I0220 11:49:15.554977 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 20 11:49:15.558200 master-0 kubenswrapper[7756]: I0220 11:49:15.558164 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 20 11:49:15.559476 master-0 kubenswrapper[7756]: I0220 11:49:15.559460 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 20 11:49:15.559644 master-0 kubenswrapper[7756]: I0220 11:49:15.559614 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 20 11:49:15.561171 master-0 kubenswrapper[7756]: I0220 11:49:15.561129 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/db2a7cb1-1d05-4b24-86ed-f823fad5013e-trusted-ca\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:15.581256 master-0 kubenswrapper[7756]: I0220 11:49:15.581212 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 20 11:49:15.598694 master-0 kubenswrapper[7756]: I0220 11:49:15.598612 7756 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 20 11:49:15.599122 master-0 kubenswrapper[7756]: I0220 11:49:15.599081 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 20 11:49:15.618784 master-0 kubenswrapper[7756]: I0220 11:49:15.618736 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 20 11:49:15.621505 master-0 kubenswrapper[7756]: I0220 11:49:15.621466 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:15.621780 master-0 kubenswrapper[7756]: I0220 11:49:15.621750 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-systemd-units\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.621970 master-0 kubenswrapper[7756]: I0220 11:49:15.621944 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5j4cs\" (UniqueName: \"kubernetes.io/projected/478be5e4-cf17-4ebf-a45a-c18cd2b69929-kube-api-access-5j4cs\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.622135 master-0 kubenswrapper[7756]: I0220 11:49:15.622108 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:15.622296 master-0 kubenswrapper[7756]: I0220 11:49:15.622271 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2dv\" (UniqueName: \"kubernetes.io/projected/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-kube-api-access-8k2dv\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:15.622451 master-0 kubenswrapper[7756]: I0220 11:49:15.622427 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms8wk\" (UniqueName: \"kubernetes.io/projected/836a6d7e-9b26-425f-ae21-00422515d7fe-kube-api-access-ms8wk\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:49:15.622634 master-0 kubenswrapper[7756]: I0220 11:49:15.622605 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-hostroot\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 
11:49:15.622784 master-0 kubenswrapper[7756]: I0220 11:49:15.622761 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26x7b\" (UniqueName: \"kubernetes.io/projected/4d060bff-3c25-4eeb-bdd3-e20fb2687645-kube-api-access-26x7b\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:15.622952 master-0 kubenswrapper[7756]: I0220 11:49:15.622926 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-serving-cert\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:15.623112 master-0 kubenswrapper[7756]: I0220 11:49:15.623087 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-etc-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.623251 master-0 kubenswrapper[7756]: I0220 11:49:15.623194 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-serving-cert\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:15.623331 master-0 kubenswrapper[7756]: I0220 11:49:15.621982 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-ca\") pod 
\"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:15.623331 master-0 kubenswrapper[7756]: I0220 11:49:15.622751 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:15.623488 master-0 kubenswrapper[7756]: I0220 11:49:15.623232 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.623692 master-0 kubenswrapper[7756]: I0220 11:49:15.623666 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ts6s\" (UniqueName: \"kubernetes.io/projected/31969539-bfd1-466f-8697-f13cbbd957df-kube-api-access-7ts6s\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 11:49:15.623888 master-0 kubenswrapper[7756]: I0220 11:49:15.623862 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:15.624040 master-0 kubenswrapper[7756]: I0220 11:49:15.624016 7756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:15.624192 master-0 kubenswrapper[7756]: I0220 11:49:15.624167 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mggv\" (UniqueName: \"kubernetes.io/projected/1df81fcc-f967-4874-ad16-1a89f0e7875a-kube-api-access-7mggv\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:15.624370 master-0 kubenswrapper[7756]: I0220 11:49:15.624181 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:15.624477 master-0 kubenswrapper[7756]: I0220 11:49:15.624338 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:15.624590 master-0 kubenswrapper[7756]: I0220 11:49:15.624504 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:15.624590 master-0 kubenswrapper[7756]: I0220 11:49:15.624565 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-cnibin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.624725 master-0 kubenswrapper[7756]: I0220 11:49:15.624617 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk5sc\" (UniqueName: \"kubernetes.io/projected/eb135cff-1a2e-468d-80ab-f7db3f57552a-kube-api-access-tk5sc\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:15.624725 master-0 kubenswrapper[7756]: I0220 11:49:15.624651 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 11:49:15.624725 master-0 kubenswrapper[7756]: I0220 11:49:15.624691 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:15.625036 master-0 kubenswrapper[7756]: I0220 11:49:15.624953 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-config\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:15.625126 master-0 kubenswrapper[7756]: E0220 11:49:15.625034 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 11:49:15.625201 master-0 kubenswrapper[7756]: I0220 11:49:15.625037 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvxsh\" (UniqueName: \"kubernetes.io/projected/839bf5b1-b242-4bbd-bc09-cf6abcf7f734-kube-api-access-pvxsh\") pod \"csi-snapshot-controller-operator-6fb4df594f-8x7xw\" (UID: \"839bf5b1-b242-4bbd-bc09-cf6abcf7f734\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" Feb 20 11:49:15.625201 master-0 kubenswrapper[7756]: E0220 11:49:15.625148 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.125110777 +0000 UTC m=+1.867358825 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found Feb 20 11:49:15.625343 master-0 kubenswrapper[7756]: I0220 11:49:15.625212 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-cni-binary-copy\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.625343 master-0 kubenswrapper[7756]: I0220 11:49:15.625271 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:49:15.625343 master-0 kubenswrapper[7756]: I0220 11:49:15.625270 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 11:49:15.625518 master-0 kubenswrapper[7756]: I0220 11:49:15.625362 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-bin\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.625518 
master-0 kubenswrapper[7756]: I0220 11:49:15.625444 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-netns\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.625518 master-0 kubenswrapper[7756]: I0220 11:49:15.625484 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-config\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:15.625518 master-0 kubenswrapper[7756]: I0220 11:49:15.625496 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:15.625794 master-0 kubenswrapper[7756]: I0220 11:49:15.625593 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:15.625794 master-0 kubenswrapper[7756]: I0220 11:49:15.625668 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-cni-binary-copy\") 
pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.625794 master-0 kubenswrapper[7756]: I0220 11:49:15.625695 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:15.625794 master-0 kubenswrapper[7756]: I0220 11:49:15.625772 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/312ca024-c8f0-4994-8f9a-b707607341fe-metrics-tls\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:49:15.626027 master-0 kubenswrapper[7756]: I0220 11:49:15.625826 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.626027 master-0 kubenswrapper[7756]: I0220 11:49:15.625880 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-socket-dir-parent\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.626027 master-0 kubenswrapper[7756]: I0220 11:49:15.625974 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:15.626193 master-0 kubenswrapper[7756]: I0220 11:49:15.626053 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-env-overrides\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.626193 master-0 kubenswrapper[7756]: I0220 11:49:15.626103 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/906307ef-d988-49e7-9d63-39116a2c4880-iptables-alerter-script\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:15.626193 master-0 kubenswrapper[7756]: I0220 11:49:15.626153 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/906307ef-d988-49e7-9d63-39116a2c4880-host-slash\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:15.626350 master-0 kubenswrapper[7756]: I0220 11:49:15.626208 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79j9f\" (UniqueName: \"kubernetes.io/projected/1709ef31-9ddd-42bf-9a95-4be4502a0828-kube-api-access-79j9f\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 
11:49:15.626350 master-0 kubenswrapper[7756]: I0220 11:49:15.626220 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/312ca024-c8f0-4994-8f9a-b707607341fe-metrics-tls\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:49:15.626350 master-0 kubenswrapper[7756]: I0220 11:49:15.626263 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvjcp\" (UniqueName: \"kubernetes.io/projected/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-kube-api-access-lvjcp\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:15.626350 master-0 kubenswrapper[7756]: I0220 11:49:15.626307 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 11:49:15.626596 master-0 kubenswrapper[7756]: I0220 11:49:15.626356 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:15.626596 master-0 kubenswrapper[7756]: I0220 11:49:15.626410 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: 
\"kubernetes.io/empty-dir/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:15.626596 master-0 kubenswrapper[7756]: I0220 11:49:15.626518 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-netd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.626767 master-0 kubenswrapper[7756]: I0220 11:49:15.626609 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-images\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:15.626767 master-0 kubenswrapper[7756]: E0220 11:49:15.626632 7756 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 20 11:49:15.626767 master-0 kubenswrapper[7756]: I0220 11:49:15.626662 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-ovnkube-identity-cm\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:49:15.626767 master-0 kubenswrapper[7756]: E0220 11:49:15.626691 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls 
podName:7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.126668253 +0000 UTC m=+1.868916301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-r9ntt" (UID: "7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca") : secret "image-registry-operator-tls" not found Feb 20 11:49:15.626767 master-0 kubenswrapper[7756]: I0220 11:49:15.626725 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:15.627048 master-0 kubenswrapper[7756]: I0220 11:49:15.626772 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:15.627048 master-0 kubenswrapper[7756]: I0220 11:49:15.626813 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-config\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:15.627048 master-0 kubenswrapper[7756]: I0220 11:49:15.626856 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-k8s-cni-cncf-io\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.627048 master-0 kubenswrapper[7756]: I0220 11:49:15.626946 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-node-log\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.627048 master-0 kubenswrapper[7756]: I0220 11:49:15.626983 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5360f3f5-2d07-432f-af45-22659538c55e-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 11:49:15.627048 master-0 kubenswrapper[7756]: I0220 11:49:15.627044 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:15.627368 master-0 kubenswrapper[7756]: I0220 11:49:15.627063 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-env-overrides\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 
11:49:15.627368 master-0 kubenswrapper[7756]: I0220 11:49:15.627081 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-conf-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.627368 master-0 kubenswrapper[7756]: I0220 11:49:15.627123 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:15.627368 master-0 kubenswrapper[7756]: I0220 11:49:15.627163 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcnmk\" (UniqueName: \"kubernetes.io/projected/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-kube-api-access-rcnmk\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:15.627368 master-0 kubenswrapper[7756]: I0220 11:49:15.627199 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-os-release\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.627368 master-0 kubenswrapper[7756]: I0220 11:49:15.627235 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwcs\" (UniqueName: \"kubernetes.io/projected/533fe3c7-504f-40aa-aab0-8d66ef27920f-kube-api-access-jrwcs\") pod \"multus-9qpc7\" (UID: 
\"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.627368 master-0 kubenswrapper[7756]: I0220 11:49:15.627271 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-config\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.627368 master-0 kubenswrapper[7756]: I0220 11:49:15.627334 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-script-lib\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.627884 master-0 kubenswrapper[7756]: I0220 11:49:15.627395 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvm8\" (UniqueName: \"kubernetes.io/projected/5360f3f5-2d07-432f-af45-22659538c55e-kube-api-access-7vvm8\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 11:49:15.627884 master-0 kubenswrapper[7756]: I0220 11:49:15.627748 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 11:49:15.627884 master-0 kubenswrapper[7756]: I0220 11:49:15.627830 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nd7r\" (UniqueName: 
\"kubernetes.io/projected/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-kube-api-access-8nd7r\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:15.628074 master-0 kubenswrapper[7756]: I0220 11:49:15.627908 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-serving-cert\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:15.628074 master-0 kubenswrapper[7756]: I0220 11:49:15.627946 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:15.628074 master-0 kubenswrapper[7756]: I0220 11:49:15.627971 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:15.628074 master-0 kubenswrapper[7756]: I0220 11:49:15.628009 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " 
pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:15.628289 master-0 kubenswrapper[7756]: I0220 11:49:15.628091 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-client\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:15.628289 master-0 kubenswrapper[7756]: I0220 11:49:15.628137 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df81fcc-f967-4874-ad16-1a89f0e7875a-config\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:15.628289 master-0 kubenswrapper[7756]: I0220 11:49:15.628170 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.628289 master-0 kubenswrapper[7756]: I0220 11:49:15.628220 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.628494 master-0 kubenswrapper[7756]: I0220 11:49:15.628331 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovn-node-metrics-cert\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.628494 master-0 kubenswrapper[7756]: I0220 11:49:15.628386 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:15.628494 master-0 kubenswrapper[7756]: I0220 11:49:15.628414 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5360f3f5-2d07-432f-af45-22659538c55e-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 11:49:15.628494 master-0 kubenswrapper[7756]: I0220 11:49:15.628427 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpnmz\" (UniqueName: \"kubernetes.io/projected/312ca024-c8f0-4994-8f9a-b707607341fe-kube-api-access-bpnmz\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:49:15.628775 master-0 kubenswrapper[7756]: I0220 11:49:15.628481 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-system-cni-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " 
pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.628775 master-0 kubenswrapper[7756]: I0220 11:49:15.628609 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:15.628775 master-0 kubenswrapper[7756]: I0220 11:49:15.628652 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:15.628775 master-0 kubenswrapper[7756]: I0220 11:49:15.628658 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0b28c90-d5b6-44f3-867c-020ece32ac7d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 11:49:15.629029 master-0 kubenswrapper[7756]: I0220 11:49:15.628969 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/906307ef-d988-49e7-9d63-39116a2c4880-iptables-alerter-script\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:15.629029 master-0 kubenswrapper[7756]: I0220 11:49:15.628968 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.628172 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629452 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-images\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629608 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-config\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629658 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:15.630795 master-0 
kubenswrapper[7756]: I0220 11:49:15.629739 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629760 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-client\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629789 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629831 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k8n8\" (UniqueName: \"kubernetes.io/projected/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-kube-api-access-2k8n8\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629841 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-config\") pod 
\"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629862 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629878 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zkbq\" (UniqueName: \"kubernetes.io/projected/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-kube-api-access-2zkbq\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.629981 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/22bba1b3-587d-4802-b4ae-946827c3fa7a-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630011 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630020 7756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630156 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-script-lib\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630201 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df81fcc-f967-4874-ad16-1a89f0e7875a-config\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630233 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-var-lib-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630293 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630327 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpk24\" (UniqueName: \"kubernetes.io/projected/d65a0af4-c96f-44f8-9384-6bae4585983b-kube-api-access-bpk24\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630367 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630389 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630379 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovn-node-metrics-cert\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630517 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630638 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/22bba1b3-587d-4802-b4ae-946827c3fa7a-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630676 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-serving-cert\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630693 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wnh5\" (UniqueName: \"kubernetes.io/projected/22bba1b3-587d-4802-b4ae-946827c3fa7a-kube-api-access-2wnh5\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630692 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " 
pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630734 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:15.630795 master-0 kubenswrapper[7756]: I0220 11:49:15.630792 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.630886 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/312ca024-c8f0-4994-8f9a-b707607341fe-host-etc-kube\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.630945 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-bin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631078 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-cnibin\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631178 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-log-socket\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631270 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/67f890c8-05a1-4797-8da8-6194aea0df9a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631352 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-multus-certs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631373 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631393 7756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631477 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/67f890c8-05a1-4797-8da8-6194aea0df9a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631590 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/836a6d7e-9b26-425f-ae21-00422515d7fe-webhook-cert\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631646 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f890c8-05a1-4797-8da8-6194aea0df9a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631688 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx26k\" (UniqueName: \"kubernetes.io/projected/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-kube-api-access-jx26k\") pod 
\"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631743 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-os-release\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631798 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-system-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631847 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-daemon-config\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631921 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631951 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-systemd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.631999 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-multus\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.632026 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-kubelet\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.632057 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j82z\" (UniqueName: \"kubernetes.io/projected/906307ef-d988-49e7-9d63-39116a2c4880-kube-api-access-5j82z\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.632088 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4457\" (UniqueName: \"kubernetes.io/projected/6dfca740-0387-428a-b957-3e8a09c6e352-kube-api-access-d4457\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 
11:49:15.632147 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31969539-bfd1-466f-8697-f13cbbd957df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.632216 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.632228 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.632254 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-daemon-config\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.632283 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-env-overrides\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s"
Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: E0220 11:49:15.632324 7756 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.632352 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-ovn\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: I0220 11:49:15.632384 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0b28c90-d5b6-44f3-867c-020ece32ac7d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:15.632424 master-0 kubenswrapper[7756]: E0220 11:49:15.632457 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls podName:db2a7cb1-1d05-4b24-86ed-f823fad5013e nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.132415832 +0000 UTC m=+1.874664040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls") pod "ingress-operator-6569778c84-kw2v6" (UID: "db2a7cb1-1d05-4b24-86ed-f823fad5013e") : secret "metrics-tls" not found
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632515 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5360f3f5-2d07-432f-af45-22659538c55e-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632605 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0b28c90-d5b6-44f3-867c-020ece32ac7d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632611 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-kubelet\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632673 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df81fcc-f967-4874-ad16-1a89f0e7875a-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632723 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-etc-kubernetes\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632768 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-slash\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632815 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5360f3f5-2d07-432f-af45-22659538c55e-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632816 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-netns\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632895 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-serving-cert\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.632964 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pw4\" (UniqueName: \"kubernetes.io/projected/07281644-2789-424f-8429-aa4448dda01e-kube-api-access-l5pw4\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.633004 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.633043 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b28c90-d5b6-44f3-867c-020ece32ac7d-config\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.633143 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df81fcc-f967-4874-ad16-1a89f0e7875a-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.633408 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b28c90-d5b6-44f3-867c-020ece32ac7d-config\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:15.634193 master-0 kubenswrapper[7756]: I0220 11:49:15.633643 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-serving-cert\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"
Feb 20 11:49:15.638695 master-0 kubenswrapper[7756]: I0220 11:49:15.638637 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 20 11:49:15.643047 master-0 kubenswrapper[7756]: I0220 11:49:15.642996 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-env-overrides\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s"
Feb 20 11:49:15.658415 master-0 kubenswrapper[7756]: I0220 11:49:15.658343 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 20 11:49:15.662792 master-0 kubenswrapper[7756]: I0220 11:49:15.662733 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/836a6d7e-9b26-425f-ae21-00422515d7fe-webhook-cert\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s"
Feb 20 11:49:15.678747 master-0 kubenswrapper[7756]: I0220 11:49:15.678675 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 20 11:49:15.699448 master-0 kubenswrapper[7756]: I0220 11:49:15.699363 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 20 11:49:15.707890 master-0 kubenswrapper[7756]: I0220 11:49:15.707831 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-ovnkube-identity-cm\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s"
Feb 20 11:49:15.719136 master-0 kubenswrapper[7756]: I0220 11:49:15.719047 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 20 11:49:15.734411 master-0 kubenswrapper[7756]: I0220 11:49:15.733885 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-etc-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.734411 master-0 kubenswrapper[7756]: I0220 11:49:15.733957 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.734411 master-0 kubenswrapper[7756]: I0220 11:49:15.734066 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.734411 master-0 kubenswrapper[7756]: I0220 11:49:15.734157 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:15.734411 master-0 kubenswrapper[7756]: I0220 11:49:15.734201 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-cnibin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.734411 master-0 kubenswrapper[7756]: I0220 11:49:15.734218 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-etc-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.734411 master-0 kubenswrapper[7756]: I0220 11:49:15.734293 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:15.734411 master-0 kubenswrapper[7756]: I0220 11:49:15.734307 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-cnibin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.734411 master-0 kubenswrapper[7756]: E0220 11:49:15.734340 7756 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: E0220 11:49:15.734426 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.23439596 +0000 UTC m=+1.976643998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: I0220 11:49:15.734498 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-bin\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: E0220 11:49:15.734516 7756 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: I0220 11:49:15.734577 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-netns\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: I0220 11:49:15.734614 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-bin\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: I0220 11:49:15.734549 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-netns\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: E0220 11:49:15.734664 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.234632867 +0000 UTC m=+1.976880905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : secret "metrics-daemon-secret" not found
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: I0220 11:49:15.734721 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: I0220 11:49:15.734785 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: I0220 11:49:15.734830 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: E0220 11:49:15.734873 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 20 11:49:15.735057 master-0 kubenswrapper[7756]: E0220 11:49:15.734988 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: E0220 11:49:15.735243 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.235102271 +0000 UTC m=+1.977350289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.734872 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: E0220 11:49:15.736090 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.236056459 +0000 UTC m=+1.978304467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "node-tuning-operator-tls" not found
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: E0220 11:49:15.736184 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: E0220 11:49:15.736229 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.236221144 +0000 UTC m=+1.978469152 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736272 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-socket-dir-parent\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736342 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/906307ef-d988-49e7-9d63-39116a2c4880-host-slash\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736388 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-netd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736425 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736470 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-k8s-cni-cncf-io\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736499 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-conf-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736539 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-node-log\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736570 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736569 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-socket-dir-parent\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736658 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-node-log\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736702 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/906307ef-d988-49e7-9d63-39116a2c4880-host-slash\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736756 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-netd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736793 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736829 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-conf-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: E0220 11:49:15.736908 7756 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: E0220 11:49:15.736940 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.236931725 +0000 UTC m=+1.979179733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736970 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-os-release\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737007 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-k8s-cni-cncf-io\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737040 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737102 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.736897 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737150 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737203 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-system-cni-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737241 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: E0220 11:49:15.737266 7756 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737285 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-var-lib-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737310 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737337 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: E0220 11:49:15.737359 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.237337237 +0000 UTC m=+1.979585275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737414 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:15.737517 master-0 kubenswrapper[7756]: I0220 11:49:15.737595 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64"
Feb 20 11:49:15.740428 master-0 kubenswrapper[7756]: I0220 11:49:15.739171 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 20 11:49:15.740428 master-0 kubenswrapper[7756]: I0220 11:49:15.740188 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-os-release\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.740428 master-0 kubenswrapper[7756]: I0220 11:49:15.740243 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-var-lib-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:15.740719 master-0 kubenswrapper[7756]: I0220 11:49:15.740432 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j"
Feb 20 11:49:15.740719 master-0 kubenswrapper[7756]: I0220 11:49:15.740492 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/312ca024-c8f0-4994-8f9a-b707607341fe-host-etc-kube\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598"
Feb 20 11:49:15.740719 master-0 kubenswrapper[7756]: I0220 11:49:15.740562 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-bin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:15.740719 master-0 kubenswrapper[7756]: E0220 11:49:15.740593 7756 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 11:49:15.740719 master-0 kubenswrapper[7756]: I0220 11:49:15.740688 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-cnibin\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64"
Feb 20 11:49:15.740719 master-0
kubenswrapper[7756]: E0220 11:49:15.740704 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.240675645 +0000 UTC m=+1.982923693 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:15.740719 master-0 kubenswrapper[7756]: I0220 11:49:15.740702 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/312ca024-c8f0-4994-8f9a-b707607341fe-host-etc-kube\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.740610 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-cnibin\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.740803 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-bin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: E0220 11:49:15.740927 7756 
secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: E0220 11:49:15.740963 7756 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: E0220 11:49:15.741017 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.240981634 +0000 UTC m=+1.983229682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741059 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-log-socket\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: E0220 11:49:15.741117 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls podName:b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.241101877 +0000 UTC m=+1.983349925 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls") pod "dns-operator-8c7d49845-qhx9j" (UID: "b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8") : secret "metrics-tls" not found Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741157 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-log-socket\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741391 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-system-cni-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: E0220 11:49:15.741427 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741471 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-multus-certs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741445 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-multus-certs\") pod \"multus-9qpc7\" (UID: 
\"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: E0220 11:49:15.741578 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:16.241548261 +0000 UTC m=+1.983796309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741655 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741723 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741859 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 
20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741887 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-os-release\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741946 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-system-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.741997 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-systemd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.742022 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.742073 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-multus\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " 
pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.742148 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-system-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.742182 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-kubelet\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.742223 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-systemd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.742261 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-multus\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.742958 master-0 kubenswrapper[7756]: I0220 11:49:15.742577 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-kubelet\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.742958 master-0 
kubenswrapper[7756]: I0220 11:49:15.742748 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-ovn\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743454 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-os-release\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.742609 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-ovn\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743595 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31969539-bfd1-466f-8697-f13cbbd957df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743601 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-kubelet\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.745025 
master-0 kubenswrapper[7756]: I0220 11:49:15.743655 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-kubelet\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743667 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-etc-kubernetes\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743699 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-slash\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743742 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-slash\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743757 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-etc-kubernetes\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743810 7756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-netns\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743853 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.743948 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-netns\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.744043 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-systemd-units\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.744128 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-hostroot\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.744178 7756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-systemd-units\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.745025 master-0 kubenswrapper[7756]: I0220 11:49:15.744250 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-hostroot\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 11:49:15.770632 master-0 kubenswrapper[7756]: I0220 11:49:15.770476 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:15.793491 master-0 kubenswrapper[7756]: I0220 11:49:15.793432 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p4w6\" (UniqueName: \"kubernetes.io/projected/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-kube-api-access-8p4w6\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 11:49:15.798820 master-0 kubenswrapper[7756]: I0220 11:49:15.797263 7756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 11:49:15.809640 master-0 kubenswrapper[7756]: I0220 11:49:15.809467 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-bound-sa-token\") 
pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:15.829852 master-0 kubenswrapper[7756]: I0220 11:49:15.829810 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2tk7\" (UniqueName: \"kubernetes.io/projected/01e90033-9ddf-41b4-ab61-e89add6c2fde-kube-api-access-j2tk7\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 11:49:15.860433 master-0 kubenswrapper[7756]: I0220 11:49:15.860350 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1388469-5e55-4c1b-97c3-c88777f29ae7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 11:49:15.877635 master-0 kubenswrapper[7756]: I0220 11:49:15.877565 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kct\" (UniqueName: \"kubernetes.io/projected/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-kube-api-access-z2kct\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:15.899213 master-0 kubenswrapper[7756]: I0220 11:49:15.899132 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqxhp\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-kube-api-access-lqxhp\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 
11:49:15.920744 master-0 kubenswrapper[7756]: I0220 11:49:15.920657 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6td56\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-kube-api-access-6td56\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:15.933250 master-0 kubenswrapper[7756]: I0220 11:49:15.933166 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 11:49:15.974479 master-0 kubenswrapper[7756]: I0220 11:49:15.974429 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j4cs\" (UniqueName: \"kubernetes.io/projected/478be5e4-cf17-4ebf-a45a-c18cd2b69929-kube-api-access-5j4cs\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:15.990857 master-0 kubenswrapper[7756]: I0220 11:49:15.990782 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2dv\" (UniqueName: \"kubernetes.io/projected/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-kube-api-access-8k2dv\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:16.022382 master-0 kubenswrapper[7756]: I0220 11:49:16.022294 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms8wk\" (UniqueName: 
\"kubernetes.io/projected/836a6d7e-9b26-425f-ae21-00422515d7fe-kube-api-access-ms8wk\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 11:49:16.043242 master-0 kubenswrapper[7756]: I0220 11:49:16.043182 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26x7b\" (UniqueName: \"kubernetes.io/projected/4d060bff-3c25-4eeb-bdd3-e20fb2687645-kube-api-access-26x7b\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:16.061039 master-0 kubenswrapper[7756]: I0220 11:49:16.060983 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ts6s\" (UniqueName: \"kubernetes.io/projected/31969539-bfd1-466f-8697-f13cbbd957df-kube-api-access-7ts6s\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 11:49:16.080933 master-0 kubenswrapper[7756]: I0220 11:49:16.080911 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mggv\" (UniqueName: \"kubernetes.io/projected/1df81fcc-f967-4874-ad16-1a89f0e7875a-kube-api-access-7mggv\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 11:49:16.101630 master-0 kubenswrapper[7756]: I0220 11:49:16.101607 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk5sc\" (UniqueName: \"kubernetes.io/projected/eb135cff-1a2e-468d-80ab-f7db3f57552a-kube-api-access-tk5sc\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " 
pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:16.119970 master-0 kubenswrapper[7756]: I0220 11:49:16.119922 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvxsh\" (UniqueName: \"kubernetes.io/projected/839bf5b1-b242-4bbd-bc09-cf6abcf7f734-kube-api-access-pvxsh\") pod \"csi-snapshot-controller-operator-6fb4df594f-8x7xw\" (UID: \"839bf5b1-b242-4bbd-bc09-cf6abcf7f734\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw"
Feb 20 11:49:16.134990 master-0 kubenswrapper[7756]: I0220 11:49:16.134926 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79j9f\" (UniqueName: \"kubernetes.io/projected/1709ef31-9ddd-42bf-9a95-4be4502a0828-kube-api-access-79j9f\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:16.150080 master-0 kubenswrapper[7756]: I0220 11:49:16.150035 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 11:49:16.150280 master-0 kubenswrapper[7756]: I0220 11:49:16.150106 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:49:16.150280 master-0 kubenswrapper[7756]: I0220 11:49:16.150155 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"
Feb 20 11:49:16.150357 master-0 kubenswrapper[7756]: E0220 11:49:16.150308 7756 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 20 11:49:16.150400 master-0 kubenswrapper[7756]: E0220 11:49:16.150359 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls podName:7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.150344251 +0000 UTC m=+2.892592259 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-r9ntt" (UID: "7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca") : secret "image-registry-operator-tls" not found
Feb 20 11:49:16.150872 master-0 kubenswrapper[7756]: E0220 11:49:16.150790 7756 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 11:49:16.150872 master-0 kubenswrapper[7756]: E0220 11:49:16.150835 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls podName:db2a7cb1-1d05-4b24-86ed-f823fad5013e nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.150825465 +0000 UTC m=+2.893073473 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls") pod "ingress-operator-6569778c84-kw2v6" (UID: "db2a7cb1-1d05-4b24-86ed-f823fad5013e") : secret "metrics-tls" not found
Feb 20 11:49:16.151005 master-0 kubenswrapper[7756]: E0220 11:49:16.150856 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 20 11:49:16.151005 master-0 kubenswrapper[7756]: E0220 11:49:16.150966 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.150937048 +0000 UTC m=+2.893185086 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found
Feb 20 11:49:16.151348 master-0 kubenswrapper[7756]: I0220 11:49:16.151312 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvjcp\" (UniqueName: \"kubernetes.io/projected/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-kube-api-access-lvjcp\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2"
Feb 20 11:49:16.169312 master-0 kubenswrapper[7756]: I0220 11:49:16.169259 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvm8\" (UniqueName: \"kubernetes.io/projected/5360f3f5-2d07-432f-af45-22659538c55e-kube-api-access-7vvm8\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 11:49:16.201451 master-0 kubenswrapper[7756]: I0220 11:49:16.201398 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0b28c90-d5b6-44f3-867c-020ece32ac7d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v"
Feb 20 11:49:16.222190 master-0 kubenswrapper[7756]: I0220 11:49:16.222154 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcnmk\" (UniqueName: \"kubernetes.io/projected/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-kube-api-access-rcnmk\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j"
Feb 20 11:49:16.244853 master-0 kubenswrapper[7756]: I0220 11:49:16.244778 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpnmz\" (UniqueName: \"kubernetes.io/projected/312ca024-c8f0-4994-8f9a-b707607341fe-kube-api-access-bpnmz\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598"
Feb 20 11:49:16.251550 master-0 kubenswrapper[7756]: I0220 11:49:16.251475 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:16.251834 master-0 kubenswrapper[7756]: E0220 11:49:16.251745 7756 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 11:49:16.251938 master-0 kubenswrapper[7756]: E0220 11:49:16.251913 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.251880836 +0000 UTC m=+2.994128884 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found
Feb 20 11:49:16.252012 master-0 kubenswrapper[7756]: E0220 11:49:16.251931 7756 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Feb 20 11:49:16.252066 master-0 kubenswrapper[7756]: E0220 11:49:16.252015 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.251990279 +0000 UTC m=+2.994238327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found
Feb 20 11:49:16.252066 master-0 kubenswrapper[7756]: I0220 11:49:16.251796 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:16.252286 master-0 kubenswrapper[7756]: I0220 11:49:16.252073 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:16.252286 master-0 kubenswrapper[7756]: E0220 11:49:16.252231 7756 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 11:49:16.252286 master-0 kubenswrapper[7756]: I0220 11:49:16.252264 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:16.252450 master-0 kubenswrapper[7756]: E0220 11:49:16.252310 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.252287248 +0000 UTC m=+2.994535286 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found
Feb 20 11:49:16.252450 master-0 kubenswrapper[7756]: E0220 11:49:16.252364 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 20 11:49:16.252450 master-0 kubenswrapper[7756]: I0220 11:49:16.252382 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:16.252450 master-0 kubenswrapper[7756]: E0220 11:49:16.252411 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.252396491 +0000 UTC m=+2.994644539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: I0220 11:49:16.252454 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j"
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: E0220 11:49:16.252471 7756 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: E0220 11:49:16.252510 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.252497284 +0000 UTC m=+2.994745332 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: E0220 11:49:16.252611 7756 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: E0220 11:49:16.252664 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls podName:b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.252650209 +0000 UTC m=+2.994898257 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls") pod "dns-operator-8c7d49845-qhx9j" (UID: "b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8") : secret "metrics-tls" not found
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: I0220 11:49:16.252729 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: I0220 11:49:16.252788 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: I0220 11:49:16.252827 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: I0220 11:49:16.252867 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:16.252828 master-0 kubenswrapper[7756]: E0220 11:49:16.252897 7756 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: I0220 11:49:16.252909 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: E0220 11:49:16.252937 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.252925327 +0000 UTC m=+2.995173345 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: E0220 11:49:16.252935 7756 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: E0220 11:49:16.252991 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.252977028 +0000 UTC m=+2.995225076 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : secret "metrics-daemon-secret" not found
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: E0220 11:49:16.252997 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: E0220 11:49:16.253063 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: E0220 11:49:16.253082 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: E0220 11:49:16.253104 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.253090492 +0000 UTC m=+2.995338540 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: E0220 11:49:16.253130 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.253117403 +0000 UTC m=+2.995365451 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found
Feb 20 11:49:16.253567 master-0 kubenswrapper[7756]: E0220 11:49:16.253152 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:17.253141333 +0000 UTC m=+2.995389371 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "node-tuning-operator-tls" not found
Feb 20 11:49:16.264097 master-0 kubenswrapper[7756]: I0220 11:49:16.264038 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k8n8\" (UniqueName: \"kubernetes.io/projected/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-kube-api-access-2k8n8\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc"
Feb 20 11:49:16.281604 master-0 kubenswrapper[7756]: I0220 11:49:16.281557 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nd7r\" (UniqueName: \"kubernetes.io/projected/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-kube-api-access-8nd7r\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:16.296187 master-0 kubenswrapper[7756]: I0220 11:49:16.296143 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zkbq\" (UniqueName: \"kubernetes.io/projected/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-kube-api-access-2zkbq\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw"
Feb 20 11:49:16.321088 master-0 kubenswrapper[7756]: I0220 11:49:16.321008 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpk24\" (UniqueName: \"kubernetes.io/projected/d65a0af4-c96f-44f8-9384-6bae4585983b-kube-api-access-bpk24\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:16.340918 master-0 kubenswrapper[7756]: I0220 11:49:16.340863 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwcs\" (UniqueName: \"kubernetes.io/projected/533fe3c7-504f-40aa-aab0-8d66ef27920f-kube-api-access-jrwcs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 11:49:16.353969 master-0 kubenswrapper[7756]: I0220 11:49:16.353936 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wnh5\" (UniqueName: \"kubernetes.io/projected/22bba1b3-587d-4802-b4ae-946827c3fa7a-kube-api-access-2wnh5\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:16.378551 master-0 kubenswrapper[7756]: I0220 11:49:16.378489 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx26k\" (UniqueName: \"kubernetes.io/projected/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-kube-api-access-jx26k\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:16.398927 master-0 kubenswrapper[7756]: I0220 11:49:16.398876 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f890c8-05a1-4797-8da8-6194aea0df9a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:16.417130 master-0 kubenswrapper[7756]: I0220 11:49:16.417090 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4457\" (UniqueName: \"kubernetes.io/projected/6dfca740-0387-428a-b957-3e8a09c6e352-kube-api-access-d4457\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:16.439903 master-0 kubenswrapper[7756]: I0220 11:49:16.439840 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j82z\" (UniqueName: \"kubernetes.io/projected/906307ef-d988-49e7-9d63-39116a2c4880-kube-api-access-5j82z\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr"
Feb 20 11:49:16.455769 master-0 kubenswrapper[7756]: I0220 11:49:16.455701 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pw4\" (UniqueName: \"kubernetes.io/projected/07281644-2789-424f-8429-aa4448dda01e-kube-api-access-l5pw4\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64"
Feb 20 11:49:16.466745 master-0 kubenswrapper[7756]: W0220 11:49:16.466704 7756 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 20 11:49:16.466834 master-0 kubenswrapper[7756]: E0220 11:49:16.466805 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 11:49:16.486260 master-0 kubenswrapper[7756]: E0220 11:49:16.486213 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 11:49:16.508841 master-0 kubenswrapper[7756]: E0220 11:49:16.508790 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 11:49:16.532908 master-0 kubenswrapper[7756]: I0220 11:49:16.532832 7756 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 20 11:49:16.541436 master-0 kubenswrapper[7756]: I0220 11:49:16.541388 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:16.715446 master-0 kubenswrapper[7756]: I0220 11:49:16.715012 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:16.782038 master-0 kubenswrapper[7756]: E0220 11:49:16.781660 7756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7"
Feb 20 11:49:16.782719 master-0 kubenswrapper[7756]: E0220 11:49:16.781958 7756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7,Command:[cluster-kube-scheduler-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-77cd4d9559-9zp85_openshift-kube-scheduler-operator(f98aeaf7-bf1a-46af-bf1b-85713baa4c67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 20 11:49:16.784024 master-0 kubenswrapper[7756]: E0220 11:49:16.783937 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" podUID="f98aeaf7-bf1a-46af-bf1b-85713baa4c67"
Feb 20 11:49:16.788147 master-0 kubenswrapper[7756]: I0220 11:49:16.788081 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=2.788057672 podStartE2EDuration="2.788057672s" podCreationTimestamp="2026-02-20 11:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:49:16.787505876 +0000 UTC m=+2.529753884" watchObservedRunningTime="2026-02-20 11:49:16.788057672 +0000 UTC m=+2.530305680"
Feb 20 11:49:17.162954 master-0 kubenswrapper[7756]: I0220 11:49:17.162008 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 11:49:17.162954 master-0 kubenswrapper[7756]: I0220 11:49:17.162088 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:49:17.162954 master-0 kubenswrapper[7756]: I0220 11:49:17.162152 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"
Feb 20 11:49:17.162954 master-0 kubenswrapper[7756]: E0220 11:49:17.162336 7756 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 20 11:49:17.162954 master-0 kubenswrapper[7756]: E0220 11:49:17.162388 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls podName:7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.162372048 +0000 UTC m=+4.904620076 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-r9ntt" (UID: "7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca") : secret "image-registry-operator-tls" not found
Feb 20 11:49:17.162954 master-0 kubenswrapper[7756]: E0220 11:49:17.162756 7756 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 11:49:17.162954 master-0 kubenswrapper[7756]: E0220 11:49:17.162786 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls podName:db2a7cb1-1d05-4b24-86ed-f823fad5013e nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.16277633 +0000 UTC m=+4.905024348 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls") pod "ingress-operator-6569778c84-kw2v6" (UID: "db2a7cb1-1d05-4b24-86ed-f823fad5013e") : secret "metrics-tls" not found
Feb 20 11:49:17.162954 master-0 kubenswrapper[7756]: E0220 11:49:17.162827 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 20 11:49:17.162954 master-0 kubenswrapper[7756]: E0220 11:49:17.162851 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.162842752 +0000 UTC m=+4.905090770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found
Feb 20 11:49:17.263019 master-0 kubenswrapper[7756]: I0220 11:49:17.262959 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:17.263019 master-0 kubenswrapper[7756]: I0220 11:49:17.263010 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:17.263019 master-0 kubenswrapper[7756]: I0220 11:49:17.263029 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:17.263385 master-0 kubenswrapper[7756]: E0220 11:49:17.263156 7756 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 11:49:17.263385 master-0 kubenswrapper[7756]: E0220 11:49:17.263231 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.263210063 +0000 UTC m=+5.005458081 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found
Feb 20 11:49:17.263385 master-0 kubenswrapper[7756]: I0220 11:49:17.263229 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:17.263385 master-0 kubenswrapper[7756]: E0220 11:49:17.263272 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 20 11:49:17.263385 master-0 kubenswrapper[7756]: I0220 11:49:17.263297 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:17.263385 master-0 kubenswrapper[7756]: E0220 11:49:17.263320 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert
podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.263304076 +0000 UTC m=+5.005552084 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found Feb 20 11:49:17.263385 master-0 kubenswrapper[7756]: E0220 11:49:17.263355 7756 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 20 11:49:17.263385 master-0 kubenswrapper[7756]: E0220 11:49:17.263372 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.263366648 +0000 UTC m=+5.005614656 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : secret "metrics-daemon-secret" not found Feb 20 11:49:17.263674 master-0 kubenswrapper[7756]: I0220 11:49:17.263476 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:17.263674 master-0 kubenswrapper[7756]: E0220 11:49:17.263565 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 20 11:49:17.263674 master-0 kubenswrapper[7756]: E0220 11:49:17.263619 7756 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 20 11:49:17.263674 master-0 kubenswrapper[7756]: E0220 11:49:17.263629 7756 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 20 11:49:17.263674 master-0 kubenswrapper[7756]: E0220 11:49:17.263654 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.263648466 +0000 UTC m=+5.005896474 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found Feb 20 11:49:17.263674 master-0 kubenswrapper[7756]: E0220 11:49:17.263619 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:17.263674 master-0 kubenswrapper[7756]: I0220 11:49:17.263595 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:17.263860 master-0 kubenswrapper[7756]: E0220 11:49:17.263680 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.263660466 +0000 UTC m=+5.005908514 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found Feb 20 11:49:17.263860 master-0 kubenswrapper[7756]: E0220 11:49:17.263809 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. 
No retries permitted until 2026-02-20 11:49:19.263770929 +0000 UTC m=+5.006019027 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "node-tuning-operator-tls" not found Feb 20 11:49:17.263919 master-0 kubenswrapper[7756]: E0220 11:49:17.263863 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.263842452 +0000 UTC m=+5.006090580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:17.263972 master-0 kubenswrapper[7756]: I0220 11:49:17.263936 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:17.264090 master-0 kubenswrapper[7756]: I0220 11:49:17.264046 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:17.264128 master-0 
kubenswrapper[7756]: E0220 11:49:17.264097 7756 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 11:49:17.264459 master-0 kubenswrapper[7756]: I0220 11:49:17.264152 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:17.264459 master-0 kubenswrapper[7756]: I0220 11:49:17.264218 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:17.264459 master-0 kubenswrapper[7756]: E0220 11:49:17.264176 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 20 11:49:17.264459 master-0 kubenswrapper[7756]: E0220 11:49:17.264284 7756 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:17.264459 master-0 kubenswrapper[7756]: E0220 11:49:17.264239 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.264224353 +0000 UTC m=+5.006472371 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found Feb 20 11:49:17.264459 master-0 kubenswrapper[7756]: E0220 11:49:17.264337 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.264326346 +0000 UTC m=+5.006574424 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:17.264459 master-0 kubenswrapper[7756]: E0220 11:49:17.264356 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.264344936 +0000 UTC m=+5.006593084 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found Feb 20 11:49:17.264459 master-0 kubenswrapper[7756]: E0220 11:49:17.264439 7756 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 20 11:49:17.265285 master-0 kubenswrapper[7756]: E0220 11:49:17.264569 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls podName:b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:19.264506541 +0000 UTC m=+5.006754589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls") pod "dns-operator-8c7d49845-qhx9j" (UID: "b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8") : secret "metrics-tls" not found Feb 20 11:49:17.347111 master-0 kubenswrapper[7756]: E0220 11:49:17.346789 7756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" Feb 20 11:49:17.347111 master-0 kubenswrapper[7756]: E0220 11:49:17.346968 7756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:copy-catalogd-manifests,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2,Command:[/bin/sh],Args:[-c cp -a /openshift/manifests 
/operand-assets/catalogd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:operand-assets,ReadOnly:false,MountPath:/operand-assets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2k8n8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000380000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-olm-operator-5bd7768f54-j5fsc_openshift-cluster-olm-operator(ce2b6fde-de56-49c3-9bd6-e81c679b02bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 20 11:49:17.348222 master-0 kubenswrapper[7756]: E0220 11:49:17.348136 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" podUID="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" Feb 20 11:49:18.032409 master-0 kubenswrapper[7756]: E0220 11:49:18.032282 7756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" Feb 20 11:49:18.033069 master-0 kubenswrapper[7756]: E0220 11:49:18.032618 7756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:etcd-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396,Command:[cluster-etcd-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml --terminate-on-files=/var/run/secrets/serving-cert/tls.crt --terminate-on-files=/var/run/secrets/serving-cert/tls.key --terminate-on-files=/var/run/secrets/etcd-client/tls.crt --terminate-on-files=/var/run/secrets/etcd-client/tls.key --terminate-on-files=/var/run/configmaps/etcd-ca/ca-bundle.crt --terminate-on-files=/var/run/configmaps/etcd-service-ca/service-ca.crt],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPENSHIFT_PROFILE,Value:web,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-service-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-client,ReadOnly:false,MountPath:/var/run/secrets/etcd-client,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvjcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
etcd-operator-545bf96f4d-d69w2_openshift-etcd-operator(1d3a36bb-9d11-48b3-a3b5-07b47738ef97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 20 11:49:18.034015 master-0 kubenswrapper[7756]: E0220 11:49:18.033926 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" podUID="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" Feb 20 11:49:18.596569 master-0 kubenswrapper[7756]: E0220 11:49:18.596457 7756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" Feb 20 11:49:18.596807 master-0 kubenswrapper[7756]: E0220 11:49:18.596729 7756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e,Command:[],Args:[start 
-v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d953b34fe1ab03e9a57b3c91de4220683cf92e804edb5f5c230e5888e1c5a6d2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvxsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000160000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-6fb4df594f-8x7xw_openshift-cluster-storage-operator(839bf5b1-b242-4bbd-bc09-cf6abcf7f734): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 20 
11:49:18.598129 master-0 kubenswrapper[7756]: E0220 11:49:18.598055 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" podUID="839bf5b1-b242-4bbd-bc09-cf6abcf7f734" Feb 20 11:49:18.630261 master-0 kubenswrapper[7756]: I0220 11:49:18.630149 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 20 11:49:18.636675 master-0 kubenswrapper[7756]: I0220 11:49:18.636623 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 20 11:49:18.649382 master-0 kubenswrapper[7756]: I0220 11:49:18.649328 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 20 11:49:18.655889 master-0 kubenswrapper[7756]: I0220 11:49:18.655819 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 20 11:49:18.852629 master-0 kubenswrapper[7756]: I0220 11:49:18.852541 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:49:19.187774 master-0 kubenswrapper[7756]: I0220 11:49:19.187666 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:19.188404 
master-0 kubenswrapper[7756]: E0220 11:49:19.187913 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 11:49:19.188404 master-0 kubenswrapper[7756]: I0220 11:49:19.187984 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:19.188404 master-0 kubenswrapper[7756]: E0220 11:49:19.188003 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.187980347 +0000 UTC m=+8.930228455 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found Feb 20 11:49:19.188404 master-0 kubenswrapper[7756]: E0220 11:49:19.188182 7756 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 20 11:49:19.188404 master-0 kubenswrapper[7756]: E0220 11:49:19.188276 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls podName:7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca nodeName:}" failed. 
No retries permitted until 2026-02-20 11:49:23.188252875 +0000 UTC m=+8.930500913 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-r9ntt" (UID: "7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca") : secret "image-registry-operator-tls" not found Feb 20 11:49:19.188404 master-0 kubenswrapper[7756]: I0220 11:49:19.188335 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:19.188610 master-0 kubenswrapper[7756]: E0220 11:49:19.188457 7756 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 20 11:49:19.188610 master-0 kubenswrapper[7756]: E0220 11:49:19.188506 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls podName:db2a7cb1-1d05-4b24-86ed-f823fad5013e nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.188492422 +0000 UTC m=+8.930740460 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls") pod "ingress-operator-6569778c84-kw2v6" (UID: "db2a7cb1-1d05-4b24-86ed-f823fad5013e") : secret "metrics-tls" not found
Feb 20 11:49:19.288811 master-0 kubenswrapper[7756]: I0220 11:49:19.288746 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:19.289064 master-0 kubenswrapper[7756]: E0220 11:49:19.288911 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 11:49:19.289064 master-0 kubenswrapper[7756]: I0220 11:49:19.288946 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:19.289064 master-0 kubenswrapper[7756]: E0220 11:49:19.289007 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.288975897 +0000 UTC m=+9.031223915 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "node-tuning-operator-tls" not found
Feb 20 11:49:19.289064 master-0 kubenswrapper[7756]: I0220 11:49:19.289039 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289101 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: I0220 11:49:19.289113 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289166 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.289146362 +0000 UTC m=+9.031394480 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289243 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: I0220 11:49:19.289261 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289309 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.289292006 +0000 UTC m=+9.031540014 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: I0220 11:49:19.289328 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289342 7756 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: I0220 11:49:19.289358 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289378 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.289369219 +0000 UTC m=+9.031617237 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: I0220 11:49:19.289398 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j"
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289343 7756 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289436 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289443 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.28943558 +0000 UTC m=+9.031683598 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found
Feb 20 11:49:19.289446 master-0 kubenswrapper[7756]: E0220 11:49:19.289472 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.289462861 +0000 UTC m=+9.031710869 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289503 7756 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: I0220 11:49:19.289562 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289570 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.289553384 +0000 UTC m=+9.031801392 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289649 7756 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: I0220 11:49:19.289676 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289684 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls podName:b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.289674998 +0000 UTC m=+9.031923006 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls") pod "dns-operator-8c7d49845-qhx9j" (UID: "b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8") : secret "metrics-tls" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289614 7756 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289726 7756 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289731 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.289722359 +0000 UTC m=+9.031970377 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289765 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.28975799 +0000 UTC m=+9.032005998 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: I0220 11:49:19.289794 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289894 7756 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 20 11:49:19.291830 master-0 kubenswrapper[7756]: E0220 11:49:19.289915 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:23.289909134 +0000 UTC m=+9.032157142 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : secret "metrics-daemon-secret" not found
Feb 20 11:49:19.506497 master-0 kubenswrapper[7756]: I0220 11:49:19.506394 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:19.510565 master-0 kubenswrapper[7756]: I0220 11:49:19.510479 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:19.543351 master-0 kubenswrapper[7756]: E0220 11:49:19.543264 7756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9"
Feb 20 11:49:19.543600 master-0 kubenswrapper[7756]: E0220 11:49:19.543514 7756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j82z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-gkxzr_openshift-network-operator(906307ef-d988-49e7-9d63-39116a2c4880): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 20 11:49:19.544969 master-0 kubenswrapper[7756]: E0220 11:49:19.544791 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-gkxzr" podUID="906307ef-d988-49e7-9d63-39116a2c4880"
Feb 20 11:49:19.685353 master-0 kubenswrapper[7756]: I0220 11:49:19.685313 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:20.076737 master-0 kubenswrapper[7756]: E0220 11:49:20.076649 7756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896"
Feb 20 11:49:20.077048 master-0 kubenswrapper[7756]: E0220 11:49:20.076946 7756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vvm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-584cc7bcb5-qdb75_openshift-controller-manager-operator(5360f3f5-2d07-432f-af45-22659538c55e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 20 11:49:20.078604 master-0 kubenswrapper[7756]: E0220 11:49:20.078496 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" podUID="5360f3f5-2d07-432f-af45-22659538c55e"
Feb 20 11:49:20.145827 master-0 kubenswrapper[7756]: I0220 11:49:20.145421 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:20.184742 master-0 kubenswrapper[7756]: I0220 11:49:20.184657 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 11:49:20.279563 master-0 kubenswrapper[7756]: I0220 11:49:20.279485 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:20.301339 master-0 kubenswrapper[7756]: I0220 11:49:20.300999 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-h5w2t"]
Feb 20 11:49:20.682729 master-0 kubenswrapper[7756]: I0220 11:49:20.682675 7756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 11:49:20.682729 master-0 kubenswrapper[7756]: I0220 11:49:20.682702 7756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 11:49:20.691293 master-0 kubenswrapper[7756]: W0220 11:49:20.691237 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39ccf158_b40f_4dba_90e2_27b1409487b7.slice/crio-bccd3e3cca0e5a27f19803d019ffa435cc0a6a211a761789d34e9900fb9748dc WatchSource:0}: Error finding container bccd3e3cca0e5a27f19803d019ffa435cc0a6a211a761789d34e9900fb9748dc: Status 404 returned error can't find the container with id bccd3e3cca0e5a27f19803d019ffa435cc0a6a211a761789d34e9900fb9748dc
Feb 20 11:49:21.687664 master-0 kubenswrapper[7756]: I0220 11:49:21.687594 7756 generic.go:334] "Generic (PLEG): container finished" podID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerID="7271b0c2f4252bb9d18ca82cf9dc28e192310d41fc4837e2edcbc00ae9a2f5cd" exitCode=0
Feb 20 11:49:21.689049 master-0 kubenswrapper[7756]: I0220 11:49:21.687679 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" event={"ID":"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145","Type":"ContainerDied","Data":"7271b0c2f4252bb9d18ca82cf9dc28e192310d41fc4837e2edcbc00ae9a2f5cd"}
Feb 20 11:49:21.692152 master-0 kubenswrapper[7756]: I0220 11:49:21.692104 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-h5w2t" event={"ID":"39ccf158-b40f-4dba-90e2-27b1409487b7","Type":"ContainerStarted","Data":"320ffa16eac809af29206ccb1c4495da313d923694ef3ce6a0be86b1c57edb02"}
Feb 20 11:49:21.692316 master-0 kubenswrapper[7756]: I0220 11:49:21.692293 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-h5w2t" event={"ID":"39ccf158-b40f-4dba-90e2-27b1409487b7","Type":"ContainerStarted","Data":"bccd3e3cca0e5a27f19803d019ffa435cc0a6a211a761789d34e9900fb9748dc"}
Feb 20 11:49:21.695007 master-0 kubenswrapper[7756]: I0220 11:49:21.693950 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" event={"ID":"01e90033-9ddf-41b4-ab61-e89add6c2fde","Type":"ContainerStarted","Data":"731cb148dbfdffc2b55c2372adae7ffe3b1128ca5f50a9d64465c2aba12d6905"}
Feb 20 11:49:21.696409 master-0 kubenswrapper[7756]: I0220 11:49:21.695875 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" event={"ID":"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9","Type":"ContainerStarted","Data":"c3fd58850441274093931c36087d9a8518e8af6cd5182fdb00d74233da8d66da"}
Feb 20 11:49:21.698471 master-0 kubenswrapper[7756]: I0220 11:49:21.698432 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" event={"ID":"1df81fcc-f967-4874-ad16-1a89f0e7875a","Type":"ContainerStarted","Data":"5461ac8869ede1ae48aaf443305cec8c0cf9a21a54dc206e103440a3f966bcc9"}
Feb 20 11:49:21.707595 master-0 kubenswrapper[7756]: I0220 11:49:21.700260 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" event={"ID":"f1388469-5e55-4c1b-97c3-c88777f29ae7","Type":"ContainerStarted","Data":"b288109ee32770ae0136eb8073a319dc58d7b8d8a7d067c5f9bf71abd12290e4"}
Feb 20 11:49:21.707595 master-0 kubenswrapper[7756]: I0220 11:49:21.703012 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" event={"ID":"6c3aa45a-44cc-48fb-a478-ce01a70c4b02","Type":"ContainerStarted","Data":"f4d85100cd0f06816a98689538bc93ed981f60823f3ce37e7c844447bcdb96ee"}
Feb 20 11:49:21.742088 master-0 kubenswrapper[7756]: I0220 11:49:21.742037 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:21.749683 master-0 kubenswrapper[7756]: I0220 11:49:21.749616 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:22.243557 master-0 kubenswrapper[7756]: I0220 11:49:22.243490 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9"]
Feb 20 11:49:22.243754 master-0 kubenswrapper[7756]: E0220 11:49:22.243698 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952aa6bb-4f60-4582-b978-52ebf9218755" containerName="prober"
Feb 20 11:49:22.243754 master-0 kubenswrapper[7756]: I0220 11:49:22.243713 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="952aa6bb-4f60-4582-b978-52ebf9218755" containerName="prober"
Feb 20 11:49:22.243754 master-0 kubenswrapper[7756]: E0220 11:49:22.243722 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerName="assisted-installer-controller"
Feb 20 11:49:22.243754 master-0 kubenswrapper[7756]: I0220 11:49:22.243730 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerName="assisted-installer-controller"
Feb 20 11:49:22.243925 master-0 kubenswrapper[7756]: I0220 11:49:22.243791 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="952aa6bb-4f60-4582-b978-52ebf9218755" containerName="prober"
Feb 20 11:49:22.243925 master-0 kubenswrapper[7756]: I0220 11:49:22.243804 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerName="assisted-installer-controller"
Feb 20 11:49:22.247548 master-0 kubenswrapper[7756]: I0220 11:49:22.244251 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9"
Feb 20 11:49:22.257562 master-0 kubenswrapper[7756]: I0220 11:49:22.253050 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 20 11:49:22.257562 master-0 kubenswrapper[7756]: I0220 11:49:22.253422 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 20 11:49:22.309905 master-0 kubenswrapper[7756]: I0220 11:49:22.309854 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9"]
Feb 20 11:49:22.338795 master-0 kubenswrapper[7756]: I0220 11:49:22.338741 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8djgj\" (UniqueName: \"kubernetes.io/projected/1fb59696-1d5f-41bb-9211-b89c63b10840-kube-api-access-8djgj\") pod \"migrator-5c85bff57-j46n9\" (UID: \"1fb59696-1d5f-41bb-9211-b89c63b10840\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9"
Feb 20 11:49:22.440210 master-0 kubenswrapper[7756]: I0220 11:49:22.440105 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8djgj\" (UniqueName: \"kubernetes.io/projected/1fb59696-1d5f-41bb-9211-b89c63b10840-kube-api-access-8djgj\") pod \"migrator-5c85bff57-j46n9\" (UID: \"1fb59696-1d5f-41bb-9211-b89c63b10840\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9"
Feb 20 11:49:22.460183 master-0 kubenswrapper[7756]: I0220 11:49:22.460122 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8djgj\" (UniqueName: \"kubernetes.io/projected/1fb59696-1d5f-41bb-9211-b89c63b10840-kube-api-access-8djgj\") pod \"migrator-5c85bff57-j46n9\" (UID: \"1fb59696-1d5f-41bb-9211-b89c63b10840\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9"
Feb 20 11:49:22.582455 master-0 kubenswrapper[7756]: I0220 11:49:22.582365 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9"
Feb 20 11:49:22.706458 master-0 kubenswrapper[7756]: I0220 11:49:22.706415 7756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 11:49:22.714457 master-0 kubenswrapper[7756]: I0220 11:49:22.714398 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:49:22.808938 master-0 kubenswrapper[7756]: I0220 11:49:22.808763 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9"]
Feb 20 11:49:22.840281 master-0 kubenswrapper[7756]: W0220 11:49:22.840103 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb59696_1d5f_41bb_9211_b89c63b10840.slice/crio-33f515505da92fce1875904be2b838a9fceeeb5773f300e97e9d391050d94811 WatchSource:0}: Error finding container 33f515505da92fce1875904be2b838a9fceeeb5773f300e97e9d391050d94811: Status 404 returned error can't find the container with id 33f515505da92fce1875904be2b838a9fceeeb5773f300e97e9d391050d94811
Feb 20 11:49:23.152192 master-0 kubenswrapper[7756]: I0220 11:49:23.152025 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 11:49:23.251485 master-0 kubenswrapper[7756]: I0220 11:49:23.249759 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:49:23.251485 master-0 kubenswrapper[7756]: I0220 11:49:23.249955 7756
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"
Feb 20 11:49:23.251485 master-0 kubenswrapper[7756]: I0220 11:49:23.250179 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 11:49:23.251485 master-0 kubenswrapper[7756]: E0220 11:49:23.250472 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 20 11:49:23.251485 master-0 kubenswrapper[7756]: E0220 11:49:23.250598 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.250566241 +0000 UTC m=+16.992814289 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found
Feb 20 11:49:23.251485 master-0 kubenswrapper[7756]: E0220 11:49:23.250595 7756 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 20 11:49:23.251485 master-0 kubenswrapper[7756]: E0220 11:49:23.250733 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls podName:7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.250692925 +0000 UTC m=+16.992940973 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-r9ntt" (UID: "7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca") : secret "image-registry-operator-tls" not found
Feb 20 11:49:23.251485 master-0 kubenswrapper[7756]: E0220 11:49:23.250939 7756 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 11:49:23.251485 master-0 kubenswrapper[7756]: E0220 11:49:23.250992 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls podName:db2a7cb1-1d05-4b24-86ed-f823fad5013e nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.250975193 +0000 UTC m=+16.993223231 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls") pod "ingress-operator-6569778c84-kw2v6" (UID: "db2a7cb1-1d05-4b24-86ed-f823fad5013e") : secret "metrics-tls" not found
Feb 20 11:49:23.352287 master-0 kubenswrapper[7756]: I0220 11:49:23.352216 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:23.352550 master-0 kubenswrapper[7756]: I0220 11:49:23.352302 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:23.352550 master-0 kubenswrapper[7756]: I0220 11:49:23.352341 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:23.352550 master-0 kubenswrapper[7756]: I0220 11:49:23.352375 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:23.352550 master-0 kubenswrapper[7756]: I0220 11:49:23.352407 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"
Feb 20 11:49:23.352550 master-0 kubenswrapper[7756]: I0220 11:49:23.352463 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:23.352550 master-0 kubenswrapper[7756]: I0220 11:49:23.352502 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:23.352965 master-0 kubenswrapper[7756]: I0220 11:49:23.352562 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:23.352965 master-0 kubenswrapper[7756]: I0220 11:49:23.352595 7756 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:23.352965 master-0 kubenswrapper[7756]: I0220 11:49:23.352632 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:23.352965 master-0 kubenswrapper[7756]: I0220 11:49:23.352664 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:23.353276 master-0 kubenswrapper[7756]: E0220 11:49:23.352976 7756 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 20 11:49:23.353276 master-0 kubenswrapper[7756]: E0220 11:49:23.353045 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls podName:b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.353021524 +0000 UTC m=+17.095269562 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls") pod "dns-operator-8c7d49845-qhx9j" (UID: "b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8") : secret "metrics-tls" not found Feb 20 11:49:23.353276 master-0 kubenswrapper[7756]: E0220 11:49:23.353105 7756 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 20 11:49:23.353276 master-0 kubenswrapper[7756]: E0220 11:49:23.353221 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.353192359 +0000 UTC m=+17.095440407 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : secret "metrics-daemon-secret" not found Feb 20 11:49:23.353711 master-0 kubenswrapper[7756]: E0220 11:49:23.353296 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 20 11:49:23.353711 master-0 kubenswrapper[7756]: E0220 11:49:23.353336 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.353323843 +0000 UTC m=+17.095571891 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found Feb 20 11:49:23.353711 master-0 kubenswrapper[7756]: E0220 11:49:23.353402 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 20 11:49:23.353711 master-0 kubenswrapper[7756]: E0220 11:49:23.353439 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.353427186 +0000 UTC m=+17.095675234 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "node-tuning-operator-tls" not found Feb 20 11:49:23.353711 master-0 kubenswrapper[7756]: E0220 11:49:23.353505 7756 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 20 11:49:23.353711 master-0 kubenswrapper[7756]: E0220 11:49:23.353582 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.35356606 +0000 UTC m=+17.095814098 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found Feb 20 11:49:23.353711 master-0 kubenswrapper[7756]: E0220 11:49:23.353669 7756 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 20 11:49:23.354128 master-0 kubenswrapper[7756]: E0220 11:49:23.353722 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.353704044 +0000 UTC m=+17.095952172 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found Feb 20 11:49:23.354128 master-0 kubenswrapper[7756]: E0220 11:49:23.353815 7756 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:23.354128 master-0 kubenswrapper[7756]: E0220 11:49:23.353858 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert podName:4cbb46f1-1c33-42fc-8371-6a1bea8c28ff nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.353844988 +0000 UTC m=+17.096093026 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-gwpst" (UID: "4cbb46f1-1c33-42fc-8371-6a1bea8c28ff") : secret "performance-addon-operator-webhook-cert" not found Feb 20 11:49:23.354128 master-0 kubenswrapper[7756]: E0220 11:49:23.353925 7756 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 20 11:49:23.354128 master-0 kubenswrapper[7756]: E0220 11:49:23.353959 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.353948021 +0000 UTC m=+17.096196059 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found Feb 20 11:49:23.354128 master-0 kubenswrapper[7756]: E0220 11:49:23.354017 7756 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 11:49:23.354128 master-0 kubenswrapper[7756]: E0220 11:49:23.354049 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert podName:67f890c8-05a1-4797-8da8-6194aea0df9a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.354038474 +0000 UTC m=+17.096286522 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert") pod "cluster-version-operator-5cfd9759cf-4pnsw" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a") : secret "cluster-version-operator-serving-cert" not found Feb 20 11:49:23.356084 master-0 kubenswrapper[7756]: E0220 11:49:23.356037 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 20 11:49:23.356216 master-0 kubenswrapper[7756]: E0220 11:49:23.356123 7756 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:23.356216 master-0 kubenswrapper[7756]: E0220 11:49:23.356153 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.356133925 +0000 UTC m=+17.098381963 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found Feb 20 11:49:23.356345 master-0 kubenswrapper[7756]: E0220 11:49:23.356218 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:31.356203367 +0000 UTC m=+17.098451405 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:23.714945 master-0 kubenswrapper[7756]: I0220 11:49:23.714866 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" event={"ID":"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145","Type":"ContainerStarted","Data":"65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b"} Feb 20 11:49:23.715950 master-0 kubenswrapper[7756]: I0220 11:49:23.715147 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:23.716328 master-0 kubenswrapper[7756]: I0220 11:49:23.716248 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9" event={"ID":"1fb59696-1d5f-41bb-9211-b89c63b10840","Type":"ContainerStarted","Data":"33f515505da92fce1875904be2b838a9fceeeb5773f300e97e9d391050d94811"} Feb 20 11:49:24.096987 master-0 kubenswrapper[7756]: I0220 11:49:24.096438 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:24.097217 master-0 kubenswrapper[7756]: I0220 11:49:24.097142 7756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 11:49:24.097217 master-0 kubenswrapper[7756]: I0220 11:49:24.097164 7756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 11:49:24.147702 master-0 kubenswrapper[7756]: I0220 11:49:24.147609 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:24.456517 master-0 
kubenswrapper[7756]: I0220 11:49:24.456378 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-5fph4"] Feb 20 11:49:24.456824 master-0 kubenswrapper[7756]: I0220 11:49:24.456790 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.459220 master-0 kubenswrapper[7756]: I0220 11:49:24.458968 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 20 11:49:24.459359 master-0 kubenswrapper[7756]: I0220 11:49:24.459248 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 20 11:49:24.460289 master-0 kubenswrapper[7756]: I0220 11:49:24.460023 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 20 11:49:24.460996 master-0 kubenswrapper[7756]: I0220 11:49:24.460795 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 20 11:49:24.468504 master-0 kubenswrapper[7756]: I0220 11:49:24.468364 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq4ct\" (UniqueName: \"kubernetes.io/projected/8a97bbf5-7409-4f36-894b-b88284e1b6d0-kube-api-access-vq4ct\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.468951 master-0 kubenswrapper[7756]: I0220 11:49:24.468767 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-key\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.468951 master-0 
kubenswrapper[7756]: I0220 11:49:24.468862 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-cabundle\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.472424 master-0 kubenswrapper[7756]: I0220 11:49:24.472380 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-5fph4"] Feb 20 11:49:24.571635 master-0 kubenswrapper[7756]: I0220 11:49:24.570118 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-key\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.571635 master-0 kubenswrapper[7756]: I0220 11:49:24.570233 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-cabundle\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.571635 master-0 kubenswrapper[7756]: I0220 11:49:24.570327 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq4ct\" (UniqueName: \"kubernetes.io/projected/8a97bbf5-7409-4f36-894b-b88284e1b6d0-kube-api-access-vq4ct\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.573365 master-0 kubenswrapper[7756]: I0220 11:49:24.573303 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-cabundle\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.587994 master-0 kubenswrapper[7756]: I0220 11:49:24.584610 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-key\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.587994 master-0 kubenswrapper[7756]: I0220 11:49:24.587937 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq4ct\" (UniqueName: \"kubernetes.io/projected/8a97bbf5-7409-4f36-894b-b88284e1b6d0-kube-api-access-vq4ct\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:24.766477 master-0 kubenswrapper[7756]: I0220 11:49:24.766408 7756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 11:49:24.791393 master-0 kubenswrapper[7756]: I0220 11:49:24.791334 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 11:49:25.027508 master-0 kubenswrapper[7756]: I0220 11:49:25.027318 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-5fph4"] Feb 20 11:49:25.693718 master-0 kubenswrapper[7756]: I0220 11:49:25.693313 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:25.725431 master-0 kubenswrapper[7756]: I0220 11:49:25.725345 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 11:49:25.770378 master-0 kubenswrapper[7756]: I0220 11:49:25.770303 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" event={"ID":"8a97bbf5-7409-4f36-894b-b88284e1b6d0","Type":"ContainerStarted","Data":"0394ee858152290726abadbd7c30c0f31262c014870cefb1d45db15a3536bc63"} Feb 20 11:49:25.770378 master-0 kubenswrapper[7756]: I0220 11:49:25.770377 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" event={"ID":"8a97bbf5-7409-4f36-894b-b88284e1b6d0","Type":"ContainerStarted","Data":"b2d70b13e56c93d2b547edf220b4dd7dcd419773ebed8ee5ba82b3212eb438a5"} Feb 20 11:49:25.775023 master-0 kubenswrapper[7756]: I0220 11:49:25.774980 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9" event={"ID":"1fb59696-1d5f-41bb-9211-b89c63b10840","Type":"ContainerStarted","Data":"cf67dfaf7c6a1cd6bb4ad4ead7801696bfa3db1c07b828efc3eaf0207c68e0b5"} Feb 20 11:49:25.775131 master-0 kubenswrapper[7756]: I0220 11:49:25.775023 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9" 
event={"ID":"1fb59696-1d5f-41bb-9211-b89c63b10840","Type":"ContainerStarted","Data":"48776ba0e67cbff38dc2f736492f05784734d5c7453fad6b961cf29a0fab579e"} Feb 20 11:49:25.819964 master-0 kubenswrapper[7756]: I0220 11:49:25.819885 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" podStartSLOduration=1.819847287 podStartE2EDuration="1.819847287s" podCreationTimestamp="2026-02-20 11:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:49:25.796974674 +0000 UTC m=+11.539222692" watchObservedRunningTime="2026-02-20 11:49:25.819847287 +0000 UTC m=+11.562095305" Feb 20 11:49:26.074243 master-0 kubenswrapper[7756]: I0220 11:49:26.074192 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:49:26.098023 master-0 kubenswrapper[7756]: I0220 11:49:26.097421 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9" podStartSLOduration=2.354540422 podStartE2EDuration="4.097401128s" podCreationTimestamp="2026-02-20 11:49:22 +0000 UTC" firstStartedPulling="2026-02-20 11:49:22.843285256 +0000 UTC m=+8.585533274" lastFinishedPulling="2026-02-20 11:49:24.586145942 +0000 UTC m=+10.328393980" observedRunningTime="2026-02-20 11:49:25.820383532 +0000 UTC m=+11.562631540" watchObservedRunningTime="2026-02-20 11:49:26.097401128 +0000 UTC m=+11.839649166" Feb 20 11:49:31.267764 master-0 kubenswrapper[7756]: I0220 11:49:31.267695 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " 
pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:31.268828 master-0 kubenswrapper[7756]: I0220 11:49:31.267867 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 11:49:31.268828 master-0 kubenswrapper[7756]: E0220 11:49:31.268075 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 11:49:31.268828 master-0 kubenswrapper[7756]: E0220 11:49:31.268162 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert podName:dbce6cdc-040a-48e1-8a81-b6ff9c180eba nodeName:}" failed. No retries permitted until 2026-02-20 11:49:47.268142386 +0000 UTC m=+33.010390394 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-mr99g" (UID: "dbce6cdc-040a-48e1-8a81-b6ff9c180eba") : secret "package-server-manager-serving-cert" not found Feb 20 11:49:31.268828 master-0 kubenswrapper[7756]: I0220 11:49:31.268215 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:31.281766 master-0 kubenswrapper[7756]: I0220 11:49:31.277866 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:31.281766 master-0 kubenswrapper[7756]: I0220 11:49:31.277874 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:31.370102 master-0 kubenswrapper[7756]: I0220 11:49:31.370018 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod 
\"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 11:49:31.370364 master-0 kubenswrapper[7756]: I0220 11:49:31.370178 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 11:49:31.370364 master-0 kubenswrapper[7756]: E0220 11:49:31.370233 7756 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 20 11:49:31.370364 master-0 kubenswrapper[7756]: E0220 11:49:31.370329 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics podName:6dfca740-0387-428a-b957-3e8a09c6e352 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:47.370303079 +0000 UTC m=+33.112551128 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-nr4tg" (UID: "6dfca740-0387-428a-b957-3e8a09c6e352") : secret "marketplace-operator-metrics" not found Feb 20 11:49:31.370364 master-0 kubenswrapper[7756]: I0220 11:49:31.370235 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:31.370644 master-0 kubenswrapper[7756]: E0220 11:49:31.370366 7756 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 20 11:49:31.370644 master-0 kubenswrapper[7756]: I0220 11:49:31.370403 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:31.370644 master-0 kubenswrapper[7756]: E0220 11:49:31.370446 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs podName:1709ef31-9ddd-42bf-9a95-4be4502a0828 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:47.370422863 +0000 UTC m=+33.112670881 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs") pod "network-metrics-daemon-29622" (UID: "1709ef31-9ddd-42bf-9a95-4be4502a0828") : secret "metrics-daemon-secret" not found Feb 20 11:49:31.370644 master-0 kubenswrapper[7756]: I0220 11:49:31.370477 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 11:49:31.370644 master-0 kubenswrapper[7756]: I0220 11:49:31.370559 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:49:31.370644 master-0 kubenswrapper[7756]: I0220 11:49:31.370588 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 11:49:31.370644 master-0 kubenswrapper[7756]: I0220 11:49:31.370624 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " 
pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:31.371001 master-0 kubenswrapper[7756]: I0220 11:49:31.370654 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 11:49:31.371001 master-0 kubenswrapper[7756]: I0220 11:49:31.370682 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:31.371001 master-0 kubenswrapper[7756]: I0220 11:49:31.370710 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 11:49:31.371001 master-0 kubenswrapper[7756]: E0220 11:49:31.370856 7756 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:31.371001 master-0 kubenswrapper[7756]: E0220 11:49:31.370883 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls podName:22bba1b3-587d-4802-b4ae-946827c3fa7a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:47.370874336 +0000 UTC m=+33.113122354 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-5zl5l" (UID: "22bba1b3-587d-4802-b4ae-946827c3fa7a") : secret "cluster-monitoring-operator-tls" not found Feb 20 11:49:31.371001 master-0 kubenswrapper[7756]: E0220 11:49:31.370963 7756 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 20 11:49:31.371325 master-0 kubenswrapper[7756]: E0220 11:49:31.371018 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs podName:dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:47.3709999 +0000 UTC m=+33.113247948 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-jgv89" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783") : secret "multus-admission-controller-secret" not found Feb 20 11:49:31.371325 master-0 kubenswrapper[7756]: E0220 11:49:31.371091 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 20 11:49:31.371325 master-0 kubenswrapper[7756]: E0220 11:49:31.371129 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert podName:4d060bff-3c25-4eeb-bdd3-e20fb2687645 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:47.371115763 +0000 UTC m=+33.113363821 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert") pod "catalog-operator-596f79dd6f-bjxbt" (UID: "4d060bff-3c25-4eeb-bdd3-e20fb2687645") : secret "catalog-operator-serving-cert" not found Feb 20 11:49:31.371623 master-0 kubenswrapper[7756]: E0220 11:49:31.371598 7756 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 20 11:49:31.371819 master-0 kubenswrapper[7756]: E0220 11:49:31.371800 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls podName:eb135cff-1a2e-468d-80ab-f7db3f57552a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:47.371768253 +0000 UTC m=+33.114016271 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls") pod "machine-config-operator-7f8c75f984-vvvjt" (UID: "eb135cff-1a2e-468d-80ab-f7db3f57552a") : secret "mco-proxy-tls" not found Feb 20 11:49:31.372049 master-0 kubenswrapper[7756]: E0220 11:49:31.371999 7756 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 20 11:49:31.372242 master-0 kubenswrapper[7756]: E0220 11:49:31.372223 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert podName:d65a0af4-c96f-44f8-9384-6bae4585983b nodeName:}" failed. No retries permitted until 2026-02-20 11:49:47.372207265 +0000 UTC m=+33.114455363 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert") pod "olm-operator-5499d7f7bb-6qtzc" (UID: "d65a0af4-c96f-44f8-9384-6bae4585983b") : secret "olm-operator-serving-cert" not found Feb 20 11:49:31.385741 master-0 kubenswrapper[7756]: I0220 11:49:31.377065 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:31.385741 master-0 kubenswrapper[7756]: I0220 11:49:31.379182 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:31.393421 master-0 kubenswrapper[7756]: I0220 11:49:31.393380 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-4pnsw\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:31.400606 master-0 kubenswrapper[7756]: I0220 11:49:31.395714 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 11:49:31.400606 master-0 kubenswrapper[7756]: I0220 11:49:31.396056 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:31.400606 master-0 kubenswrapper[7756]: I0220 11:49:31.396246 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 11:49:31.401314 master-0 kubenswrapper[7756]: I0220 11:49:31.401285 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" Feb 20 11:49:31.414408 master-0 kubenswrapper[7756]: I0220 11:49:31.414366 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 11:49:31.414785 master-0 kubenswrapper[7756]: I0220 11:49:31.414366 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 11:49:31.445851 master-0 kubenswrapper[7756]: W0220 11:49:31.445791 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f890c8_05a1_4797_8da8_6194aea0df9a.slice/crio-7529d79a6118a295907b42ef070c06d01dccfc108d0cf68bd4817c376797c420 WatchSource:0}: Error finding container 7529d79a6118a295907b42ef070c06d01dccfc108d0cf68bd4817c376797c420: Status 404 returned error can't find the container with id 7529d79a6118a295907b42ef070c06d01dccfc108d0cf68bd4817c376797c420 Feb 20 11:49:31.599686 master-0 kubenswrapper[7756]: I0220 11:49:31.599644 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"] Feb 20 11:49:31.612933 master-0 kubenswrapper[7756]: W0220 11:49:31.612679 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb2a7cb1_1d05_4b24_86ed_f823fad5013e.slice/crio-dd42f3b0e8e73a155f4ae8d3e76cb9c1f46437280ce91aa23b51a6b995b48869 WatchSource:0}: Error finding container dd42f3b0e8e73a155f4ae8d3e76cb9c1f46437280ce91aa23b51a6b995b48869: Status 404 returned error can't find the container with id dd42f3b0e8e73a155f4ae8d3e76cb9c1f46437280ce91aa23b51a6b995b48869 Feb 20 11:49:31.622331 master-0 kubenswrapper[7756]: I0220 11:49:31.621908 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt"] Feb 20 11:49:31.633932 master-0 kubenswrapper[7756]: W0220 11:49:31.633878 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b31b66a_29ea_4c0d_b5a3_a7ed4af1daca.slice/crio-0d45f4e60b11e0b0a317456c0195f07cdb88a32c6fdc95b3ec005464743a5f86 WatchSource:0}: Error finding container 
0d45f4e60b11e0b0a317456c0195f07cdb88a32c6fdc95b3ec005464743a5f86: Status 404 returned error can't find the container with id 0d45f4e60b11e0b0a317456c0195f07cdb88a32c6fdc95b3ec005464743a5f86 Feb 20 11:49:31.657724 master-0 kubenswrapper[7756]: I0220 11:49:31.657694 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst"] Feb 20 11:49:31.663934 master-0 kubenswrapper[7756]: W0220 11:49:31.663821 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cbb46f1_1c33_42fc_8371_6a1bea8c28ff.slice/crio-2be4e82eb96940a91f7ac36e8a59bd96b86a7b6fac8a7814b9cb48d762103f37 WatchSource:0}: Error finding container 2be4e82eb96940a91f7ac36e8a59bd96b86a7b6fac8a7814b9cb48d762103f37: Status 404 returned error can't find the container with id 2be4e82eb96940a91f7ac36e8a59bd96b86a7b6fac8a7814b9cb48d762103f37 Feb 20 11:49:31.677753 master-0 kubenswrapper[7756]: I0220 11:49:31.677707 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-qhx9j"] Feb 20 11:49:31.802844 master-0 kubenswrapper[7756]: I0220 11:49:31.802622 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" event={"ID":"67f890c8-05a1-4797-8da8-6194aea0df9a","Type":"ContainerStarted","Data":"7529d79a6118a295907b42ef070c06d01dccfc108d0cf68bd4817c376797c420"} Feb 20 11:49:31.805152 master-0 kubenswrapper[7756]: I0220 11:49:31.805115 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" event={"ID":"f98aeaf7-bf1a-46af-bf1b-85713baa4c67","Type":"ContainerStarted","Data":"ba33361681392f1def86ef3fcb0b685dd11e1a8eb4030176e604e1253b421630"} Feb 20 11:49:31.806449 master-0 kubenswrapper[7756]: I0220 11:49:31.806408 7756 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerStarted","Data":"dd42f3b0e8e73a155f4ae8d3e76cb9c1f46437280ce91aa23b51a6b995b48869"} Feb 20 11:49:31.807698 master-0 kubenswrapper[7756]: I0220 11:49:31.807664 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" event={"ID":"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca","Type":"ContainerStarted","Data":"0d45f4e60b11e0b0a317456c0195f07cdb88a32c6fdc95b3ec005464743a5f86"} Feb 20 11:49:31.809028 master-0 kubenswrapper[7756]: I0220 11:49:31.808962 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" event={"ID":"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff","Type":"ContainerStarted","Data":"2be4e82eb96940a91f7ac36e8a59bd96b86a7b6fac8a7814b9cb48d762103f37"} Feb 20 11:49:31.811127 master-0 kubenswrapper[7756]: I0220 11:49:31.811086 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" event={"ID":"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8","Type":"ContainerStarted","Data":"4925985880a2064a6380cae65dbb1eb737b503d2a9366dcfbcec286b6e942ef7"} Feb 20 11:49:32.817051 master-0 kubenswrapper[7756]: I0220 11:49:32.816946 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" event={"ID":"5360f3f5-2d07-432f-af45-22659538c55e","Type":"ContainerStarted","Data":"2d9f878c267250c634175c8afa99432d0586168560ba8d948183859d4b64504a"} Feb 20 11:49:32.821496 master-0 kubenswrapper[7756]: I0220 11:49:32.821467 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" 
event={"ID":"1d3a36bb-9d11-48b3-a3b5-07b47738ef97","Type":"ContainerStarted","Data":"8d90051cb425dcfb05eea700daacd614186eaabfc560fdf17a2b201fc46c56ad"} Feb 20 11:49:32.824519 master-0 kubenswrapper[7756]: I0220 11:49:32.824361 7756 generic.go:334] "Generic (PLEG): container finished" podID="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" containerID="cbf4059981e662ffa8f5572d1a08ac8b15a360a7ff62236f5ccfa4eb74c73c26" exitCode=0 Feb 20 11:49:32.824519 master-0 kubenswrapper[7756]: I0220 11:49:32.824389 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" event={"ID":"ce2b6fde-de56-49c3-9bd6-e81c679b02bc","Type":"ContainerDied","Data":"cbf4059981e662ffa8f5572d1a08ac8b15a360a7ff62236f5ccfa4eb74c73c26"} Feb 20 11:49:33.815189 master-0 kubenswrapper[7756]: I0220 11:49:33.813916 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7cd76464f7-76wdb"] Feb 20 11:49:33.815189 master-0 kubenswrapper[7756]: I0220 11:49:33.815142 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.818148 master-0 kubenswrapper[7756]: I0220 11:49:33.818089 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 20 11:49:33.818641 master-0 kubenswrapper[7756]: I0220 11:49:33.818292 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 20 11:49:33.818641 master-0 kubenswrapper[7756]: I0220 11:49:33.818407 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 20 11:49:33.818641 master-0 kubenswrapper[7756]: I0220 11:49:33.818590 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 20 11:49:33.818762 master-0 kubenswrapper[7756]: I0220 11:49:33.818706 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Feb 20 11:49:33.818884 master-0 kubenswrapper[7756]: I0220 11:49:33.818868 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 20 11:49:33.819731 master-0 kubenswrapper[7756]: I0220 11:49:33.819714 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 20 11:49:33.819942 master-0 kubenswrapper[7756]: I0220 11:49:33.819925 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 20 11:49:33.820551 master-0 kubenswrapper[7756]: I0220 11:49:33.820501 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Feb 20 11:49:33.824164 master-0 kubenswrapper[7756]: I0220 11:49:33.823914 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7cd76464f7-76wdb"] Feb 20 11:49:33.826152 master-0 kubenswrapper[7756]: I0220 11:49:33.826128 7756 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 20 11:49:33.878255 master-0 kubenswrapper[7756]: I0220 11:49:33.878215 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"] Feb 20 11:49:33.878924 master-0 kubenswrapper[7756]: I0220 11:49:33.878845 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:33.881663 master-0 kubenswrapper[7756]: I0220 11:49:33.881627 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 20 11:49:33.881745 master-0 kubenswrapper[7756]: I0220 11:49:33.881670 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 20 11:49:33.881819 master-0 kubenswrapper[7756]: I0220 11:49:33.881627 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 20 11:49:33.881870 master-0 kubenswrapper[7756]: I0220 11:49:33.881850 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 20 11:49:33.881999 master-0 kubenswrapper[7756]: I0220 11:49:33.881905 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 20 11:49:33.882169 master-0 kubenswrapper[7756]: I0220 11:49:33.882156 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 20 11:49:33.887983 master-0 kubenswrapper[7756]: I0220 11:49:33.887935 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"] Feb 20 11:49:33.931890 master-0 kubenswrapper[7756]: I0220 11:49:33.931842 7756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.931890 master-0 kubenswrapper[7756]: I0220 11:49:33.931890 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.932135 master-0 kubenswrapper[7756]: I0220 11:49:33.932001 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.932135 master-0 kubenswrapper[7756]: I0220 11:49:33.932041 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit-dir\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.932135 master-0 kubenswrapper[7756]: I0220 11:49:33.932060 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-encryption-config\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 
20 11:49:33.932135 master-0 kubenswrapper[7756]: I0220 11:49:33.932094 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-trusted-ca-bundle\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.932352 master-0 kubenswrapper[7756]: I0220 11:49:33.932297 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.932394 master-0 kubenswrapper[7756]: I0220 11:49:33.932351 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvps\" (UniqueName: \"kubernetes.io/projected/d46a5dc4-89d6-4be7-8aac-11f034d25076-kube-api-access-dzvps\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.932501 master-0 kubenswrapper[7756]: I0220 11:49:33.932422 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-config\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.932501 master-0 kubenswrapper[7756]: I0220 11:49:33.932465 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-node-pullsecrets\") pod 
\"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:33.932501 master-0 kubenswrapper[7756]: I0220 11:49:33.932493 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-image-import-ca\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:34.034007 master-0 kubenswrapper[7756]: I0220 11:49:34.033958 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:34.034187 master-0 kubenswrapper[7756]: I0220 11:49:34.034015 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit-dir\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:34.034187 master-0 kubenswrapper[7756]: I0220 11:49:34.034098 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit-dir\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:34.034187 master-0 kubenswrapper[7756]: I0220 11:49:34.034137 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-encryption-config\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:34.034187 master-0 kubenswrapper[7756]: E0220 11:49:34.034176 7756 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 20 11:49:34.034321 master-0 kubenswrapper[7756]: E0220 11:49:34.034248 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:34.534228428 +0000 UTC m=+20.276476656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : configmap "audit-0" not found Feb 20 11:49:34.034493 master-0 kubenswrapper[7756]: I0220 11:49:34.034427 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:34.034493 master-0 kubenswrapper[7756]: I0220 11:49:34.034470 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:34.034705 master-0 kubenswrapper[7756]: I0220 
11:49:34.034492 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-trusted-ca-bundle\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.034824 master-0 kubenswrapper[7756]: I0220 11:49:34.034784 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.034874 master-0 kubenswrapper[7756]: I0220 11:49:34.034859 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzvps\" (UniqueName: \"kubernetes.io/projected/d46a5dc4-89d6-4be7-8aac-11f034d25076-kube-api-access-dzvps\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.034917 master-0 kubenswrapper[7756]: I0220 11:49:34.034886 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.035014 master-0 kubenswrapper[7756]: E0220 11:49:34.034990 7756 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Feb 20 11:49:34.035052 master-0 kubenswrapper[7756]: I0220 11:49:34.035026 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlmg\" (UniqueName: \"kubernetes.io/projected/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-kube-api-access-8rlmg\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.035052 master-0 kubenswrapper[7756]: E0220 11:49:34.035042 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:34.535026101 +0000 UTC m=+20.277274109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : secret "etcd-client" not found
Feb 20 11:49:34.035127 master-0 kubenswrapper[7756]: I0220 11:49:34.035078 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.035127 master-0 kubenswrapper[7756]: I0220 11:49:34.035114 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-config\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.035181 master-0 kubenswrapper[7756]: I0220 11:49:34.035148 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-node-pullsecrets\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.035212 master-0 kubenswrapper[7756]: I0220 11:49:34.035178 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-image-import-ca\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.035432 master-0 kubenswrapper[7756]: I0220 11:49:34.035256 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-node-pullsecrets\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.035432 master-0 kubenswrapper[7756]: I0220 11:49:34.035293 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.035432 master-0 kubenswrapper[7756]: I0220 11:49:34.035323 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.035603 master-0 kubenswrapper[7756]: E0220 11:49:34.035487 7756 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 20 11:49:34.035603 master-0 kubenswrapper[7756]: E0220 11:49:34.035543 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:34.535511195 +0000 UTC m=+20.277759203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : secret "serving-cert" not found
Feb 20 11:49:34.035603 master-0 kubenswrapper[7756]: E0220 11:49:34.035589 7756 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found
Feb 20 11:49:34.035855 master-0 kubenswrapper[7756]: E0220 11:49:34.035618 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:34.535609087 +0000 UTC m=+20.277857105 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : configmap "etcd-serving-ca" not found
Feb 20 11:49:34.036132 master-0 kubenswrapper[7756]: I0220 11:49:34.036105 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-image-import-ca\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.036496 master-0 kubenswrapper[7756]: I0220 11:49:34.036453 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-config\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.037299 master-0 kubenswrapper[7756]: I0220 11:49:34.037125 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-trusted-ca-bundle\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.040916 master-0 kubenswrapper[7756]: I0220 11:49:34.040727 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-encryption-config\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.056436 master-0 kubenswrapper[7756]: I0220 11:49:34.056385 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzvps\" (UniqueName: \"kubernetes.io/projected/d46a5dc4-89d6-4be7-8aac-11f034d25076-kube-api-access-dzvps\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.136591 master-0 kubenswrapper[7756]: I0220 11:49:34.136383 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rlmg\" (UniqueName: \"kubernetes.io/projected/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-kube-api-access-8rlmg\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.136591 master-0 kubenswrapper[7756]: I0220 11:49:34.136465 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.136965 master-0 kubenswrapper[7756]: I0220 11:49:34.136634 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.136965 master-0 kubenswrapper[7756]: I0220 11:49:34.136816 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.136965 master-0 kubenswrapper[7756]: I0220 11:49:34.136871 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.137066 master-0 kubenswrapper[7756]: E0220 11:49:34.136810 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Feb 20 11:49:34.137066 master-0 kubenswrapper[7756]: E0220 11:49:34.137010 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 11:49:34.137122 master-0 kubenswrapper[7756]: E0220 11:49:34.137076 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Feb 20 11:49:34.137122 master-0 kubenswrapper[7756]: E0220 11:49:34.136869 7756 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 11:49:34.137122 master-0 kubenswrapper[7756]: E0220 11:49:34.137111 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:34.637042867 +0000 UTC m=+20.379291065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : configmap "config" not found
Feb 20 11:49:34.137217 master-0 kubenswrapper[7756]: E0220 11:49:34.137148 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:34.637126789 +0000 UTC m=+20.379374817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : secret "serving-cert" not found
Feb 20 11:49:34.137217 master-0 kubenswrapper[7756]: E0220 11:49:34.137166 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:34.63715797 +0000 UTC m=+20.379405988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : configmap "client-ca" not found
Feb 20 11:49:34.137217 master-0 kubenswrapper[7756]: E0220 11:49:34.137180 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:34.63717249 +0000 UTC m=+20.379420518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : configmap "openshift-global-ca" not found
Feb 20 11:49:34.173942 master-0 kubenswrapper[7756]: I0220 11:49:34.173879 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rlmg\" (UniqueName: \"kubernetes.io/projected/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-kube-api-access-8rlmg\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.541830 master-0 kubenswrapper[7756]: I0220 11:49:34.541708 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.541830 master-0 kubenswrapper[7756]: I0220 11:49:34.541818 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.542072 master-0 kubenswrapper[7756]: I0220 11:49:34.541859 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.542072 master-0 kubenswrapper[7756]: I0220 11:49:34.541968 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:34.542188 master-0 kubenswrapper[7756]: E0220 11:49:34.542159 7756 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 20 11:49:34.542318 master-0 kubenswrapper[7756]: E0220 11:49:34.542228 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.542205887 +0000 UTC m=+21.284453935 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : configmap "audit-0" not found
Feb 20 11:49:34.542806 master-0 kubenswrapper[7756]: E0220 11:49:34.542776 7756 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Feb 20 11:49:34.542866 master-0 kubenswrapper[7756]: E0220 11:49:34.542837 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.542820215 +0000 UTC m=+21.285068263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : secret "etcd-client" not found
Feb 20 11:49:34.542924 master-0 kubenswrapper[7756]: E0220 11:49:34.542902 7756 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found
Feb 20 11:49:34.542959 master-0 kubenswrapper[7756]: E0220 11:49:34.542949 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.542937088 +0000 UTC m=+21.285185136 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : configmap "etcd-serving-ca" not found
Feb 20 11:49:34.543048 master-0 kubenswrapper[7756]: E0220 11:49:34.543016 7756 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 20 11:49:34.543260 master-0 kubenswrapper[7756]: E0220 11:49:34.543060 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.543049381 +0000 UTC m=+21.285297419 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : secret "serving-cert" not found
Feb 20 11:49:34.643799 master-0 kubenswrapper[7756]: I0220 11:49:34.643704 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.644031 master-0 kubenswrapper[7756]: E0220 11:49:34.643874 7756 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 11:49:34.644031 master-0 kubenswrapper[7756]: E0220 11:49:34.643941 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.643924134 +0000 UTC m=+21.386172132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : secret "serving-cert" not found
Feb 20 11:49:34.645373 master-0 kubenswrapper[7756]: I0220 11:49:34.644213 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.645373 master-0 kubenswrapper[7756]: E0220 11:49:34.644357 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Feb 20 11:49:34.645373 master-0 kubenswrapper[7756]: E0220 11:49:34.644416 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.644396558 +0000 UTC m=+21.386644586 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : configmap "openshift-global-ca" not found
Feb 20 11:49:34.645373 master-0 kubenswrapper[7756]: I0220 11:49:34.644464 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.645373 master-0 kubenswrapper[7756]: E0220 11:49:34.644618 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 11:49:34.645373 master-0 kubenswrapper[7756]: I0220 11:49:34.644644 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:34.645373 master-0 kubenswrapper[7756]: E0220 11:49:34.644686 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.644667966 +0000 UTC m=+21.386915984 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : configmap "client-ca" not found
Feb 20 11:49:34.645373 master-0 kubenswrapper[7756]: E0220 11:49:34.644741 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Feb 20 11:49:34.645373 master-0 kubenswrapper[7756]: E0220 11:49:34.644791 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.644780139 +0000 UTC m=+21.387028157 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : configmap "config" not found
Feb 20 11:49:35.008455 master-0 kubenswrapper[7756]: I0220 11:49:35.005699 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"]
Feb 20 11:49:35.008455 master-0 kubenswrapper[7756]: E0220 11:49:35.005963 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" podUID="99b7c6c6-c06a-4566-8f38-bb523bf3c73d"
Feb 20 11:49:35.022688 master-0 kubenswrapper[7756]: I0220 11:49:35.022623 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"]
Feb 20 11:49:35.024268 master-0 kubenswrapper[7756]: I0220 11:49:35.024225 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.030422 master-0 kubenswrapper[7756]: I0220 11:49:35.030365 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 20 11:49:35.030802 master-0 kubenswrapper[7756]: I0220 11:49:35.030777 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 11:49:35.031151 master-0 kubenswrapper[7756]: I0220 11:49:35.031109 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 20 11:49:35.031333 master-0 kubenswrapper[7756]: I0220 11:49:35.031315 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 20 11:49:35.031450 master-0 kubenswrapper[7756]: I0220 11:49:35.031432 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 11:49:35.035325 master-0 kubenswrapper[7756]: I0220 11:49:35.035293 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"]
Feb 20 11:49:35.155993 master-0 kubenswrapper[7756]: I0220 11:49:35.155940 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxvq\" (UniqueName: \"kubernetes.io/projected/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-kube-api-access-hhxvq\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.156153 master-0 kubenswrapper[7756]: I0220 11:49:35.156022 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.156153 master-0 kubenswrapper[7756]: I0220 11:49:35.156065 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.156153 master-0 kubenswrapper[7756]: I0220 11:49:35.156119 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-config\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.257615 master-0 kubenswrapper[7756]: I0220 11:49:35.257559 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhxvq\" (UniqueName: \"kubernetes.io/projected/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-kube-api-access-hhxvq\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.257615 master-0 kubenswrapper[7756]: I0220 11:49:35.257624 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.257823 master-0 kubenswrapper[7756]: I0220 11:49:35.257672 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.257823 master-0 kubenswrapper[7756]: I0220 11:49:35.257727 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-config\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.258603 master-0 kubenswrapper[7756]: I0220 11:49:35.258544 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-config\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.258897 master-0 kubenswrapper[7756]: E0220 11:49:35.258855 7756 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 11:49:35.258897 master-0 kubenswrapper[7756]: E0220 11:49:35.258897 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.758887241 +0000 UTC m=+21.501135249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : secret "serving-cert" not found
Feb 20 11:49:35.259086 master-0 kubenswrapper[7756]: E0220 11:49:35.259063 7756 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 11:49:35.259128 master-0 kubenswrapper[7756]: E0220 11:49:35.259092 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:35.759085297 +0000 UTC m=+21.501333305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : configmap "client-ca" not found
Feb 20 11:49:35.297325 master-0 kubenswrapper[7756]: I0220 11:49:35.297255 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhxvq\" (UniqueName: \"kubernetes.io/projected/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-kube-api-access-hhxvq\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:35.562385 master-0 kubenswrapper[7756]: I0220 11:49:35.562218 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:35.562606 master-0 kubenswrapper[7756]: I0220 11:49:35.562392 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:35.562606 master-0 kubenswrapper[7756]: I0220 11:49:35.562454 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:35.562606 master-0 kubenswrapper[7756]: I0220 11:49:35.562475 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:35.562774 master-0 kubenswrapper[7756]: E0220 11:49:35.562724 7756 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 20 11:49:35.562845 master-0 kubenswrapper[7756]: E0220 11:49:35.562821 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:37.562797368 +0000 UTC m=+23.305045376 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : secret "serving-cert" not found
Feb 20 11:49:35.563443 master-0 kubenswrapper[7756]: E0220 11:49:35.563385 7756 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Feb 20 11:49:35.563502 master-0 kubenswrapper[7756]: E0220 11:49:35.563444 7756 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 20 11:49:35.563502 master-0 kubenswrapper[7756]: E0220 11:49:35.563472 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:37.563464347 +0000 UTC m=+23.305712355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : configmap "audit-0" not found
Feb 20 11:49:35.563502 master-0 kubenswrapper[7756]: E0220 11:49:35.563414 7756 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found
Feb 20 11:49:35.563502 master-0 kubenswrapper[7756]: E0220 11:49:35.563490 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:37.563481417 +0000 UTC m=+23.305729425 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : secret "etcd-client" not found
Feb 20 11:49:35.563653 master-0 kubenswrapper[7756]: E0220 11:49:35.563550 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:37.563512318 +0000 UTC m=+23.305760326 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : configmap "etcd-serving-ca" not found
Feb 20 11:49:35.663557 master-0 kubenswrapper[7756]: I0220 11:49:35.663488 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:35.663557 master-0 kubenswrapper[7756]: I0220 11:49:35.663550 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"
Feb 20 11:49:35.663798 master-0 kubenswrapper[7756]: I0220 11:49:35.663583 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:35.663798 master-0 kubenswrapper[7756]: I0220 11:49:35.663618 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:35.664789 master-0 kubenswrapper[7756]: I0220 11:49:35.664729 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:35.664922 master-0 kubenswrapper[7756]: E0220 11:49:35.664813 7756 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 11:49:35.664922 master-0 kubenswrapper[7756]: E0220 11:49:35.664852 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:37.664841024 +0000 UTC m=+23.407089032 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : secret "serving-cert" not found Feb 20 11:49:35.665203 master-0 kubenswrapper[7756]: E0220 11:49:35.665155 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 11:49:35.665203 master-0 kubenswrapper[7756]: E0220 11:49:35.665187 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca podName:99b7c6c6-c06a-4566-8f38-bb523bf3c73d nodeName:}" failed. No retries permitted until 2026-02-20 11:49:37.665180254 +0000 UTC m=+23.407428262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca") pod "controller-manager-6c9b8f4d95-dw2ps" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d") : configmap "client-ca" not found Feb 20 11:49:35.666170 master-0 kubenswrapper[7756]: I0220 11:49:35.666103 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config\") pod \"controller-manager-6c9b8f4d95-dw2ps\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:35.766490 master-0 kubenswrapper[7756]: I0220 11:49:35.766395 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb" Feb 20 11:49:35.766490 master-0 
kubenswrapper[7756]: I0220 11:49:35.766497 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb" Feb 20 11:49:35.766957 master-0 kubenswrapper[7756]: E0220 11:49:35.766713 7756 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 20 11:49:35.766957 master-0 kubenswrapper[7756]: E0220 11:49:35.766772 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:36.766752927 +0000 UTC m=+22.509000935 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : configmap "client-ca" not found Feb 20 11:49:35.767203 master-0 kubenswrapper[7756]: E0220 11:49:35.767160 7756 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 11:49:35.767203 master-0 kubenswrapper[7756]: E0220 11:49:35.767202 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:36.76719229 +0000 UTC m=+22.509440298 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : secret "serving-cert" not found Feb 20 11:49:35.837461 master-0 kubenswrapper[7756]: I0220 11:49:35.837354 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:35.846250 master-0 kubenswrapper[7756]: I0220 11:49:35.846198 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:35.968853 master-0 kubenswrapper[7756]: I0220 11:49:35.968779 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rlmg\" (UniqueName: \"kubernetes.io/projected/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-kube-api-access-8rlmg\") pod \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " Feb 20 11:49:35.969106 master-0 kubenswrapper[7756]: I0220 11:49:35.968906 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles\") pod \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " Feb 20 11:49:35.969106 master-0 kubenswrapper[7756]: I0220 11:49:35.968938 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config\") pod \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\" (UID: \"99b7c6c6-c06a-4566-8f38-bb523bf3c73d\") " Feb 20 11:49:35.969962 master-0 kubenswrapper[7756]: I0220 11:49:35.969918 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config" (OuterVolumeSpecName: "config") pod "99b7c6c6-c06a-4566-8f38-bb523bf3c73d" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:49:35.970289 master-0 kubenswrapper[7756]: I0220 11:49:35.970254 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "99b7c6c6-c06a-4566-8f38-bb523bf3c73d" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:49:35.972936 master-0 kubenswrapper[7756]: I0220 11:49:35.972878 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-kube-api-access-8rlmg" (OuterVolumeSpecName: "kube-api-access-8rlmg") pod "99b7c6c6-c06a-4566-8f38-bb523bf3c73d" (UID: "99b7c6c6-c06a-4566-8f38-bb523bf3c73d"). InnerVolumeSpecName "kube-api-access-8rlmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 11:49:36.071212 master-0 kubenswrapper[7756]: I0220 11:49:36.071163 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-config\") on node \"master-0\" DevicePath \"\"" Feb 20 11:49:36.071212 master-0 kubenswrapper[7756]: I0220 11:49:36.071206 7756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 20 11:49:36.071809 master-0 kubenswrapper[7756]: I0220 11:49:36.071223 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rlmg\" (UniqueName: \"kubernetes.io/projected/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-kube-api-access-8rlmg\") on node \"master-0\" DevicePath \"\"" Feb 20 11:49:36.793690 master-0 kubenswrapper[7756]: I0220 11:49:36.779802 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb" Feb 20 11:49:36.793690 master-0 kubenswrapper[7756]: I0220 11:49:36.779887 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb" Feb 20 11:49:36.793690 master-0 kubenswrapper[7756]: E0220 11:49:36.780154 7756 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found 
Feb 20 11:49:36.793690 master-0 kubenswrapper[7756]: E0220 11:49:36.780259 7756 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 11:49:36.793690 master-0 kubenswrapper[7756]: E0220 11:49:36.780277 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:38.780247674 +0000 UTC m=+24.522495722 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : configmap "client-ca" not found Feb 20 11:49:36.793690 master-0 kubenswrapper[7756]: E0220 11:49:36.780397 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:38.780367968 +0000 UTC m=+24.522615976 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : secret "serving-cert" not found Feb 20 11:49:36.851568 master-0 kubenswrapper[7756]: I0220 11:49:36.851473 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps" Feb 20 11:49:36.885198 master-0 kubenswrapper[7756]: I0220 11:49:36.885101 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"] Feb 20 11:49:36.890552 master-0 kubenswrapper[7756]: I0220 11:49:36.890472 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-dw2ps"] Feb 20 11:49:36.982917 master-0 kubenswrapper[7756]: I0220 11:49:36.982834 7756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 11:49:36.982917 master-0 kubenswrapper[7756]: I0220 11:49:36.982896 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b7c6c6-c06a-4566-8f38-bb523bf3c73d-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 11:49:37.608997 master-0 kubenswrapper[7756]: I0220 11:49:37.607138 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6df74647cb-cmzht"] Feb 20 11:49:37.608997 master-0 kubenswrapper[7756]: I0220 11:49:37.607771 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.610502 master-0 kubenswrapper[7756]: I0220 11:49:37.610296 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 20 11:49:37.610870 master-0 kubenswrapper[7756]: I0220 11:49:37.610771 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 20 11:49:37.611208 master-0 kubenswrapper[7756]: I0220 11:49:37.611173 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 20 11:49:37.611425 master-0 kubenswrapper[7756]: I0220 11:49:37.611396 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 20 11:49:37.612290 master-0 kubenswrapper[7756]: I0220 11:49:37.612178 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:37.612587 master-0 kubenswrapper[7756]: I0220 11:49:37.612506 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:37.612666 master-0 kubenswrapper[7756]: E0220 11:49:37.612600 7756 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 20 11:49:37.612666 master-0 kubenswrapper[7756]: E0220 11:49:37.612615 7756 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not 
found Feb 20 11:49:37.612793 master-0 kubenswrapper[7756]: I0220 11:49:37.612683 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:37.612793 master-0 kubenswrapper[7756]: E0220 11:49:37.612711 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:41.612672767 +0000 UTC m=+27.354920815 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : configmap "audit-0" not found Feb 20 11:49:37.612793 master-0 kubenswrapper[7756]: I0220 11:49:37.612779 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:37.613017 master-0 kubenswrapper[7756]: E0220 11:49:37.612906 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:41.612860612 +0000 UTC m=+27.355108620 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : secret "etcd-client" not found Feb 20 11:49:37.613187 master-0 kubenswrapper[7756]: E0220 11:49:37.613045 7756 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 20 11:49:37.613286 master-0 kubenswrapper[7756]: E0220 11:49:37.613220 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:41.613156371 +0000 UTC m=+27.355404419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : secret "serving-cert" not found Feb 20 11:49:37.613667 master-0 kubenswrapper[7756]: I0220 11:49:37.613626 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:37.618929 master-0 kubenswrapper[7756]: I0220 11:49:37.618602 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 20 11:49:37.621830 master-0 kubenswrapper[7756]: I0220 11:49:37.621781 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df74647cb-cmzht"] Feb 20 11:49:37.622106 master-0 kubenswrapper[7756]: I0220 11:49:37.621887 7756 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 20 11:49:37.714321 master-0 kubenswrapper[7756]: I0220 11:49:37.714241 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.714592 master-0 kubenswrapper[7756]: I0220 11:49:37.714563 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-config\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.714636 master-0 kubenswrapper[7756]: I0220 11:49:37.714606 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.714816 master-0 kubenswrapper[7756]: I0220 11:49:37.714793 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggks\" (UniqueName: \"kubernetes.io/projected/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-kube-api-access-7ggks\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.714861 master-0 kubenswrapper[7756]: I0220 11:49:37.714834 7756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-proxy-ca-bundles\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.816520 master-0 kubenswrapper[7756]: I0220 11:49:37.816465 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggks\" (UniqueName: \"kubernetes.io/projected/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-kube-api-access-7ggks\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.816520 master-0 kubenswrapper[7756]: I0220 11:49:37.816514 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-proxy-ca-bundles\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.816850 master-0 kubenswrapper[7756]: I0220 11:49:37.816806 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.817331 master-0 kubenswrapper[7756]: I0220 11:49:37.816930 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-config\") pod \"controller-manager-6df74647cb-cmzht\" (UID: 
\"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.817331 master-0 kubenswrapper[7756]: E0220 11:49:37.817042 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 11:49:37.817331 master-0 kubenswrapper[7756]: E0220 11:49:37.817124 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca podName:fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:38.31710338 +0000 UTC m=+24.059351388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca") pod "controller-manager-6df74647cb-cmzht" (UID: "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a") : configmap "client-ca" not found Feb 20 11:49:37.817331 master-0 kubenswrapper[7756]: I0220 11:49:37.817113 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" Feb 20 11:49:37.817473 master-0 kubenswrapper[7756]: E0220 11:49:37.817363 7756 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 11:49:37.817473 master-0 kubenswrapper[7756]: E0220 11:49:37.817459 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert podName:fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:38.31743819 +0000 UTC m=+24.059686238 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert") pod "controller-manager-6df74647cb-cmzht" (UID: "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a") : secret "serving-cert" not found
Feb 20 11:49:37.817771 master-0 kubenswrapper[7756]: I0220 11:49:37.817744 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-proxy-ca-bundles\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:37.821192 master-0 kubenswrapper[7756]: I0220 11:49:37.821161 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-config\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:37.860838 master-0 kubenswrapper[7756]: I0220 11:49:37.860760 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggks\" (UniqueName: \"kubernetes.io/projected/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-kube-api-access-7ggks\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:37.865398 master-0 kubenswrapper[7756]: I0220 11:49:37.865363 7756 generic.go:334] "Generic (PLEG): container finished" podID="f98aeaf7-bf1a-46af-bf1b-85713baa4c67" containerID="ba33361681392f1def86ef3fcb0b685dd11e1a8eb4030176e604e1253b421630" exitCode=0
Feb 20 11:49:37.865447 master-0 kubenswrapper[7756]: I0220 11:49:37.865408 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" event={"ID":"f98aeaf7-bf1a-46af-bf1b-85713baa4c67","Type":"ContainerDied","Data":"ba33361681392f1def86ef3fcb0b685dd11e1a8eb4030176e604e1253b421630"}
Feb 20 11:49:37.865830 master-0 kubenswrapper[7756]: I0220 11:49:37.865813 7756 scope.go:117] "RemoveContainer" containerID="ba33361681392f1def86ef3fcb0b685dd11e1a8eb4030176e604e1253b421630"
Feb 20 11:49:38.324811 master-0 kubenswrapper[7756]: I0220 11:49:38.324723 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:38.325074 master-0 kubenswrapper[7756]: I0220 11:49:38.324837 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:38.325074 master-0 kubenswrapper[7756]: E0220 11:49:38.324994 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 11:49:38.325194 master-0 kubenswrapper[7756]: E0220 11:49:38.325164 7756 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 11:49:38.325243 master-0 kubenswrapper[7756]: E0220 11:49:38.325182 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca podName:fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:39.325152381 +0000 UTC m=+25.067400429 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca") pod "controller-manager-6df74647cb-cmzht" (UID: "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a") : configmap "client-ca" not found
Feb 20 11:49:38.325358 master-0 kubenswrapper[7756]: E0220 11:49:38.325273 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert podName:fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:39.325237294 +0000 UTC m=+25.067485312 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert") pod "controller-manager-6df74647cb-cmzht" (UID: "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a") : secret "serving-cert" not found
Feb 20 11:49:38.376938 master-0 kubenswrapper[7756]: I0220 11:49:38.376857 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6df74647cb-cmzht"]
Feb 20 11:49:38.377357 master-0 kubenswrapper[7756]: E0220 11:49:38.377306 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht" podUID="fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a"
Feb 20 11:49:38.588288 master-0 kubenswrapper[7756]: I0220 11:49:38.588155 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99b7c6c6-c06a-4566-8f38-bb523bf3c73d" path="/var/lib/kubelet/pods/99b7c6c6-c06a-4566-8f38-bb523bf3c73d/volumes"
Feb 20 11:49:38.837602 master-0 kubenswrapper[7756]: I0220 11:49:38.837513 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:38.838434 master-0 kubenswrapper[7756]: E0220 11:49:38.837763 7756 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 11:49:38.838434 master-0 kubenswrapper[7756]: I0220 11:49:38.837923 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:38.838434 master-0 kubenswrapper[7756]: E0220 11:49:38.837977 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:42.837950798 +0000 UTC m=+28.580198856 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : secret "serving-cert" not found
Feb 20 11:49:38.838434 master-0 kubenswrapper[7756]: E0220 11:49:38.838061 7756 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 11:49:38.838434 master-0 kubenswrapper[7756]: E0220 11:49:38.838125 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:42.838116223 +0000 UTC m=+28.580364341 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : configmap "client-ca" not found
Feb 20 11:49:38.869249 master-0 kubenswrapper[7756]: I0220 11:49:38.869205 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:38.876346 master-0 kubenswrapper[7756]: I0220 11:49:38.876249 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:39.040368 master-0 kubenswrapper[7756]: I0220 11:49:39.040268 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-proxy-ca-bundles\") pod \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") "
Feb 20 11:49:39.040697 master-0 kubenswrapper[7756]: I0220 11:49:39.040429 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-config\") pod \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") "
Feb 20 11:49:39.040697 master-0 kubenswrapper[7756]: I0220 11:49:39.040503 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ggks\" (UniqueName: \"kubernetes.io/projected/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-kube-api-access-7ggks\") pod \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") "
Feb 20 11:49:39.041632 master-0 kubenswrapper[7756]: I0220 11:49:39.041169 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a" (UID: "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:39.041632 master-0 kubenswrapper[7756]: I0220 11:49:39.041473 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-config" (OuterVolumeSpecName: "config") pod "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a" (UID: "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:39.046279 master-0 kubenswrapper[7756]: I0220 11:49:39.046180 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-kube-api-access-7ggks" (OuterVolumeSpecName: "kube-api-access-7ggks") pod "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a" (UID: "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a"). InnerVolumeSpecName "kube-api-access-7ggks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:49:39.142740 master-0 kubenswrapper[7756]: I0220 11:49:39.142347 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ggks\" (UniqueName: \"kubernetes.io/projected/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-kube-api-access-7ggks\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:39.142740 master-0 kubenswrapper[7756]: I0220 11:49:39.142408 7756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:39.142740 master-0 kubenswrapper[7756]: I0220 11:49:39.142434 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:39.347634 master-0 kubenswrapper[7756]: I0220 11:49:39.345212 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:39.347634 master-0 kubenswrapper[7756]: E0220 11:49:39.345515 7756 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 11:49:39.347634 master-0 kubenswrapper[7756]: E0220 11:49:39.345770 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert podName:fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:41.34567987 +0000 UTC m=+27.087927908 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert") pod "controller-manager-6df74647cb-cmzht" (UID: "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a") : secret "serving-cert" not found
Feb 20 11:49:39.347634 master-0 kubenswrapper[7756]: I0220 11:49:39.346122 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca\") pod \"controller-manager-6df74647cb-cmzht\" (UID: \"fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a\") " pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:39.347634 master-0 kubenswrapper[7756]: E0220 11:49:39.346386 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 11:49:39.347634 master-0 kubenswrapper[7756]: E0220 11:49:39.346523 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca podName:fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a nodeName:}" failed. No retries permitted until 2026-02-20 11:49:41.346492264 +0000 UTC m=+27.088740312 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca") pod "controller-manager-6df74647cb-cmzht" (UID: "fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a") : configmap "client-ca" not found
Feb 20 11:49:39.406428 master-0 kubenswrapper[7756]: I0220 11:49:39.406206 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7cd76464f7-76wdb"]
Feb 20 11:49:39.415563 master-0 kubenswrapper[7756]: E0220 11:49:39.415303 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit etcd-client serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" podUID="d46a5dc4-89d6-4be7-8aac-11f034d25076"
Feb 20 11:49:39.886946 master-0 kubenswrapper[7756]: I0220 11:49:39.886890 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:39.887694 master-0 kubenswrapper[7756]: I0220 11:49:39.887411 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df74647cb-cmzht"
Feb 20 11:49:39.906949 master-0 kubenswrapper[7756]: I0220 11:49:39.906912 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:39.940608 master-0 kubenswrapper[7756]: I0220 11:49:39.938337 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6df74647cb-cmzht"]
Feb 20 11:49:39.940608 master-0 kubenswrapper[7756]: I0220 11:49:39.938388 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8594984f84-vjpf6"]
Feb 20 11:49:39.940608 master-0 kubenswrapper[7756]: I0220 11:49:39.938886 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:39.941548 master-0 kubenswrapper[7756]: I0220 11:49:39.941308 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 11:49:39.941548 master-0 kubenswrapper[7756]: I0220 11:49:39.941458 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 11:49:39.941695 master-0 kubenswrapper[7756]: I0220 11:49:39.941674 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 11:49:39.941849 master-0 kubenswrapper[7756]: I0220 11:49:39.941676 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 11:49:39.944140 master-0 kubenswrapper[7756]: I0220 11:49:39.943318 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 11:49:39.948090 master-0 kubenswrapper[7756]: I0220 11:49:39.947861 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6df74647cb-cmzht"]
Feb 20 11:49:39.951941 master-0 kubenswrapper[7756]: I0220 11:49:39.951875 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 11:49:39.954039 master-0 kubenswrapper[7756]: I0220 11:49:39.953983 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8594984f84-vjpf6"]
Feb 20 11:49:40.064379 master-0 kubenswrapper[7756]: I0220 11:49:40.064314 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-config\") pod \"d46a5dc4-89d6-4be7-8aac-11f034d25076\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") "
Feb 20 11:49:40.064507 master-0 kubenswrapper[7756]: I0220 11:49:40.064405 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca\") pod \"d46a5dc4-89d6-4be7-8aac-11f034d25076\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") "
Feb 20 11:49:40.064507 master-0 kubenswrapper[7756]: I0220 11:49:40.064456 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-encryption-config\") pod \"d46a5dc4-89d6-4be7-8aac-11f034d25076\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") "
Feb 20 11:49:40.064507 master-0 kubenswrapper[7756]: I0220 11:49:40.064498 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-trusted-ca-bundle\") pod \"d46a5dc4-89d6-4be7-8aac-11f034d25076\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") "
Feb 20 11:49:40.065041 master-0 kubenswrapper[7756]: I0220 11:49:40.064601 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-node-pullsecrets\") pod \"d46a5dc4-89d6-4be7-8aac-11f034d25076\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") "
Feb 20 11:49:40.065041 master-0 kubenswrapper[7756]: I0220 11:49:40.064661 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d46a5dc4-89d6-4be7-8aac-11f034d25076" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:49:40.065470 master-0 kubenswrapper[7756]: I0220 11:49:40.065155 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d46a5dc4-89d6-4be7-8aac-11f034d25076" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:40.066423 master-0 kubenswrapper[7756]: I0220 11:49:40.066368 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzvps\" (UniqueName: \"kubernetes.io/projected/d46a5dc4-89d6-4be7-8aac-11f034d25076-kube-api-access-dzvps\") pod \"d46a5dc4-89d6-4be7-8aac-11f034d25076\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") "
Feb 20 11:49:40.066423 master-0 kubenswrapper[7756]: I0220 11:49:40.066415 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit-dir\") pod \"d46a5dc4-89d6-4be7-8aac-11f034d25076\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") "
Feb 20 11:49:40.066635 master-0 kubenswrapper[7756]: I0220 11:49:40.066481 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-image-import-ca\") pod \"d46a5dc4-89d6-4be7-8aac-11f034d25076\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") "
Feb 20 11:49:40.066765 master-0 kubenswrapper[7756]: I0220 11:49:40.066725 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8c376c-445b-45c2-ab0c-9269265870c4-serving-cert\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.066765 master-0 kubenswrapper[7756]: I0220 11:49:40.066758 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.066939 master-0 kubenswrapper[7756]: I0220 11:49:40.066372 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-config" (OuterVolumeSpecName: "config") pod "d46a5dc4-89d6-4be7-8aac-11f034d25076" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:40.066939 master-0 kubenswrapper[7756]: I0220 11:49:40.066764 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d46a5dc4-89d6-4be7-8aac-11f034d25076" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:49:40.066939 master-0 kubenswrapper[7756]: I0220 11:49:40.066794 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-config\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.066953 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d46a5dc4-89d6-4be7-8aac-11f034d25076" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.066987 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56fq\" (UniqueName: \"kubernetes.io/projected/bc8c376c-445b-45c2-ab0c-9269265870c4-kube-api-access-m56fq\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.067054 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-proxy-ca-bundles\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.067161 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.067205 7756 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.067220 7756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.067235 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.067248 7756 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.067287 7756 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.067300 7756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.067619 master-0 kubenswrapper[7756]: I0220 11:49:40.067430 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d46a5dc4-89d6-4be7-8aac-11f034d25076" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:40.068443 master-0 kubenswrapper[7756]: I0220 11:49:40.068085 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d46a5dc4-89d6-4be7-8aac-11f034d25076" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:49:40.070137 master-0 kubenswrapper[7756]: I0220 11:49:40.070094 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d46a5dc4-89d6-4be7-8aac-11f034d25076-kube-api-access-dzvps" (OuterVolumeSpecName: "kube-api-access-dzvps") pod "d46a5dc4-89d6-4be7-8aac-11f034d25076" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076"). InnerVolumeSpecName "kube-api-access-dzvps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:49:40.168437 master-0 kubenswrapper[7756]: I0220 11:49:40.168362 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8c376c-445b-45c2-ab0c-9269265870c4-serving-cert\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.168437 master-0 kubenswrapper[7756]: I0220 11:49:40.168414 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.168437 master-0 kubenswrapper[7756]: I0220 11:49:40.168448 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-config\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.169358 master-0 kubenswrapper[7756]: E0220 11:49:40.169296 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 11:49:40.169501 master-0 kubenswrapper[7756]: E0220 11:49:40.169414 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca podName:bc8c376c-445b-45c2-ab0c-9269265870c4 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:40.669366762 +0000 UTC m=+26.411614780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca") pod "controller-manager-8594984f84-vjpf6" (UID: "bc8c376c-445b-45c2-ab0c-9269265870c4") : configmap "client-ca" not found
Feb 20 11:49:40.169665 master-0 kubenswrapper[7756]: I0220 11:49:40.169563 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m56fq\" (UniqueName: \"kubernetes.io/projected/bc8c376c-445b-45c2-ab0c-9269265870c4-kube-api-access-m56fq\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.169665 master-0 kubenswrapper[7756]: I0220 11:49:40.169626 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-proxy-ca-bundles\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.169857 master-0 kubenswrapper[7756]: I0220 11:49:40.169703 7756 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-encryption-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.169857 master-0 kubenswrapper[7756]: I0220 11:49:40.169727 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzvps\" (UniqueName: \"kubernetes.io/projected/d46a5dc4-89d6-4be7-8aac-11f034d25076-kube-api-access-dzvps\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.169857 master-0 kubenswrapper[7756]: I0220 11:49:40.169748 7756 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-image-import-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:40.169857 master-0 kubenswrapper[7756]: I0220 11:49:40.169773 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-config\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.170880 master-0 kubenswrapper[7756]: I0220 11:49:40.170824 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-proxy-ca-bundles\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.186892 master-0 kubenswrapper[7756]: I0220 11:49:40.186845 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8c376c-445b-45c2-ab0c-9269265870c4-serving-cert\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.209049 master-0 kubenswrapper[7756]: I0220 11:49:40.209004 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m56fq\" (UniqueName: \"kubernetes.io/projected/bc8c376c-445b-45c2-ab0c-9269265870c4-kube-api-access-m56fq\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.585452 master-0 kubenswrapper[7756]: I0220 11:49:40.584921 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a" path="/var/lib/kubelet/pods/fe121d8b-190f-48a4-9b20-e4d4dd2e5d5a/volumes"
Feb 20 11:49:40.689125 master-0 kubenswrapper[7756]: I0220 11:49:40.684070 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:40.689125 master-0 kubenswrapper[7756]: E0220 11:49:40.684410 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 11:49:40.689125 master-0 kubenswrapper[7756]: E0220 11:49:40.685419 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca podName:bc8c376c-445b-45c2-ab0c-9269265870c4 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:41.685396502 +0000 UTC m=+27.427644510 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca") pod "controller-manager-8594984f84-vjpf6" (UID: "bc8c376c-445b-45c2-ab0c-9269265870c4") : configmap "client-ca" not found
Feb 20 11:49:40.894114 master-0 kubenswrapper[7756]: I0220 11:49:40.894057 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerStarted","Data":"3796dafb0ee926b9fcb9517b0d055e097fe2b58d3dd50fea12b4f6a29bcd4790"}
Feb 20 11:49:40.895105 master-0 kubenswrapper[7756]: I0220 11:49:40.894126 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerStarted","Data":"3d07e9c592eed7a379f55e981ead57df10fdecdbcdadc7facb3720be20c537af"}
Feb 20 11:49:40.896464 master-0 kubenswrapper[7756]: I0220 11:49:40.896400 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" event={"ID":"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca","Type":"ContainerStarted","Data":"e47a05c8d2dbbc49205addf05b6f326c0f38dfd41f3498f290a08ebfa22cbc94"}
Feb 20 11:49:40.899765 master-0 kubenswrapper[7756]: I0220 11:49:40.899722 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" event={"ID":"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff","Type":"ContainerStarted","Data":"f478ae19f7f37b0b144530d29503cc9eb3edcf8d27e26035c2139b9aa149987b"}
Feb 20 11:49:40.902507 master-0 kubenswrapper[7756]: I0220 11:49:40.902454 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" event={"ID":"839bf5b1-b242-4bbd-bc09-cf6abcf7f734","Type":"ContainerStarted","Data":"a536c272954462921fc604267b25f8d65d6f6bc9444d2c6bb8607f4b9f14a00d"}
Feb 20 11:49:40.904793 master-0 kubenswrapper[7756]: I0220 11:49:40.904726 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" event={"ID":"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8","Type":"ContainerStarted","Data":"bb7f64392a206b73dc57009488aa74f832bb76ee429b00d19bad4f19d75604bd"}
Feb 20 11:49:40.906895 master-0 kubenswrapper[7756]: I0220 11:49:40.906873 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" event={"ID":"67f890c8-05a1-4797-8da8-6194aea0df9a","Type":"ContainerStarted","Data":"fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932"}
Feb 20 11:49:40.910119 master-0 kubenswrapper[7756]: I0220 11:49:40.910083 7756 generic.go:334] "Generic (PLEG): container finished" podID="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" containerID="55207161b0670236349ac65a2776c47132c8ff804fc186b630f3016022116ce7" exitCode=0
Feb 20 11:49:40.910232 master-0 kubenswrapper[7756]: I0220 11:49:40.910186 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" event={"ID":"ce2b6fde-de56-49c3-9bd6-e81c679b02bc","Type":"ContainerDied","Data":"55207161b0670236349ac65a2776c47132c8ff804fc186b630f3016022116ce7"}
Feb 20 11:49:40.913373 master-0 kubenswrapper[7756]: I0220 11:49:40.913328 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" event={"ID":"f98aeaf7-bf1a-46af-bf1b-85713baa4c67","Type":"ContainerStarted","Data":"f8d154b1c828589837ec3c8ec4ad4d835c269d69b663caaef17de5eec1f25aa8"}
Feb 20 11:49:40.913429 master-0 kubenswrapper[7756]: I0220 11:49:40.913383 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:41.706002 master-0 kubenswrapper[7756]: I0220 11:49:41.705657 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb"
Feb 20 11:49:41.706393 master-0 kubenswrapper[7756]: I0220 11:49:41.706361 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:41.706618 master-0 kubenswrapper[7756]: I0220 11:49:41.706589 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:41.706883 master-0 kubenswrapper[7756]: I0220 11:49:41.706855 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client\") pod \"apiserver-7cd76464f7-76wdb\" (UID: \"d46a5dc4-89d6-4be7-8aac-11f034d25076\") " pod="openshift-apiserver/apiserver-7cd76464f7-76wdb" Feb 20 11:49:41.707055 master-0 kubenswrapper[7756]: E0220 11:49:41.705933 7756 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 20 11:49:41.707265 master-0 kubenswrapper[7756]: E0220 11:49:41.707235 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:49.707199997 +0000 UTC m=+35.449448045 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : object "openshift-apiserver"/"serving-cert" not registered Feb 20 11:49:41.707464 master-0 kubenswrapper[7756]: E0220 11:49:41.706457 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 11:49:41.707665 master-0 kubenswrapper[7756]: E0220 11:49:41.707644 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca podName:bc8c376c-445b-45c2-ab0c-9269265870c4 nodeName:}" failed. 
No retries permitted until 2026-02-20 11:49:43.707622879 +0000 UTC m=+29.449870927 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca") pod "controller-manager-8594984f84-vjpf6" (UID: "bc8c376c-445b-45c2-ab0c-9269265870c4") : configmap "client-ca" not found Feb 20 11:49:41.707816 master-0 kubenswrapper[7756]: E0220 11:49:41.706705 7756 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: object "openshift-apiserver"/"audit-0" not registered Feb 20 11:49:41.708036 master-0 kubenswrapper[7756]: E0220 11:49:41.706975 7756 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 20 11:49:41.708148 master-0 kubenswrapper[7756]: E0220 11:49:41.707999 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:49.707976969 +0000 UTC m=+35.450225007 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : object "openshift-apiserver"/"audit-0" not registered Feb 20 11:49:41.708148 master-0 kubenswrapper[7756]: E0220 11:49:41.708125 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client podName:d46a5dc4-89d6-4be7-8aac-11f034d25076 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:49.708089252 +0000 UTC m=+35.450337300 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client") pod "apiserver-7cd76464f7-76wdb" (UID: "d46a5dc4-89d6-4be7-8aac-11f034d25076") : object "openshift-apiserver"/"etcd-client" not registered Feb 20 11:49:41.920012 master-0 kubenswrapper[7756]: I0220 11:49:41.919832 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" event={"ID":"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8","Type":"ContainerStarted","Data":"7a7b7bcd0b404c0a8a9a8aefc815bdd9080f092bb0431b92182ecb715649c6d7"} Feb 20 11:49:42.519137 master-0 kubenswrapper[7756]: I0220 11:49:42.518877 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-z82cm"] Feb 20 11:49:42.519713 master-0 kubenswrapper[7756]: I0220 11:49:42.519683 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.628901 master-0 kubenswrapper[7756]: I0220 11:49:42.628833 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-var-lib-kubelet\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.628901 master-0 kubenswrapper[7756]: I0220 11:49:42.628895 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-sys\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629140 master-0 kubenswrapper[7756]: I0220 11:49:42.629071 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-modprobe-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629175 master-0 kubenswrapper[7756]: I0220 11:49:42.629164 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629239 master-0 kubenswrapper[7756]: I0220 11:49:42.629212 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-kubernetes\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629280 master-0 kubenswrapper[7756]: I0220 11:49:42.629239 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-tmp\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629494 master-0 kubenswrapper[7756]: I0220 11:49:42.629464 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysconfig\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629569 master-0 kubenswrapper[7756]: I0220 11:49:42.629496 7756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-host\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629569 master-0 kubenswrapper[7756]: I0220 11:49:42.629552 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-systemd\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629663 master-0 kubenswrapper[7756]: I0220 11:49:42.629629 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxm8p\" (UniqueName: \"kubernetes.io/projected/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-kube-api-access-qxm8p\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629706 master-0 kubenswrapper[7756]: I0220 11:49:42.629695 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-conf\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629771 master-0 kubenswrapper[7756]: I0220 11:49:42.629752 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-tuned\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629806 master-0 
kubenswrapper[7756]: I0220 11:49:42.629784 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-run\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.629837 master-0 kubenswrapper[7756]: I0220 11:49:42.629807 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-lib-modules\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731380 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-host\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731579 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-host\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731687 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysconfig\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731741 7756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-systemd\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731770 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxm8p\" (UniqueName: \"kubernetes.io/projected/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-kube-api-access-qxm8p\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731798 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-conf\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731826 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-tuned\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731851 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-run\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731883 7756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-lib-modules\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731949 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-var-lib-kubelet\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.731980 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-sys\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.732014 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-modprobe-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.732033 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.732060 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-kubernetes\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.732073 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-tmp\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.733402 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-run\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.733487 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysconfig\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.733540 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-systemd\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.735557 master-0 kubenswrapper[7756]: I0220 11:49:42.734804 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: 
\"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-conf\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.738131 master-0 kubenswrapper[7756]: I0220 11:49:42.738043 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-sys\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.738214 master-0 kubenswrapper[7756]: I0220 11:49:42.738082 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.738214 master-0 kubenswrapper[7756]: I0220 11:49:42.738206 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-modprobe-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.738307 master-0 kubenswrapper[7756]: I0220 11:49:42.738247 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-lib-modules\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.738307 master-0 kubenswrapper[7756]: I0220 11:49:42.738277 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-var-lib-kubelet\") pod 
\"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.738461 master-0 kubenswrapper[7756]: I0220 11:49:42.738427 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-kubernetes\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.825050 master-0 kubenswrapper[7756]: I0220 11:49:42.824478 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-tmp\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.825050 master-0 kubenswrapper[7756]: I0220 11:49:42.824586 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-tuned\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:42.933667 master-0 kubenswrapper[7756]: I0220 11:49:42.933592 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb" Feb 20 11:49:42.933667 master-0 kubenswrapper[7756]: I0220 11:49:42.933674 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca\") pod 
\"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb" Feb 20 11:49:42.934438 master-0 kubenswrapper[7756]: E0220 11:49:42.933974 7756 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 20 11:49:42.934438 master-0 kubenswrapper[7756]: E0220 11:49:42.934137 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:50.934079794 +0000 UTC m=+36.676327822 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : configmap "client-ca" not found Feb 20 11:49:42.938784 master-0 kubenswrapper[7756]: I0220 11:49:42.938743 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb" Feb 20 11:49:43.151187 master-0 kubenswrapper[7756]: I0220 11:49:43.151008 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxm8p\" (UniqueName: \"kubernetes.io/projected/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-kube-api-access-qxm8p\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:43.165085 master-0 kubenswrapper[7756]: I0220 11:49:43.164994 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 11:49:43.182493 master-0 kubenswrapper[7756]: W0220 11:49:43.182438 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9eb45bd_fc01_4707_87ea_64f07f72f6f9.slice/crio-ff0cce7da53a6e4c2ebc0872440f95104040b68cb4cef7228a83f4a954522bc8 WatchSource:0}: Error finding container ff0cce7da53a6e4c2ebc0872440f95104040b68cb4cef7228a83f4a954522bc8: Status 404 returned error can't find the container with id ff0cce7da53a6e4c2ebc0872440f95104040b68cb4cef7228a83f4a954522bc8 Feb 20 11:49:43.746433 master-0 kubenswrapper[7756]: I0220 11:49:43.746355 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6" Feb 20 11:49:43.746701 master-0 kubenswrapper[7756]: E0220 11:49:43.746545 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 11:49:43.746701 master-0 kubenswrapper[7756]: E0220 11:49:43.746624 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca podName:bc8c376c-445b-45c2-ab0c-9269265870c4 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:47.746605117 +0000 UTC m=+33.488853135 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca") pod "controller-manager-8594984f84-vjpf6" (UID: "bc8c376c-445b-45c2-ab0c-9269265870c4") : configmap "client-ca" not found Feb 20 11:49:43.933252 master-0 kubenswrapper[7756]: I0220 11:49:43.932976 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-z82cm" event={"ID":"b9eb45bd-fc01-4707-87ea-64f07f72f6f9","Type":"ContainerStarted","Data":"f260aeb0597842bee53f893720c01a76ede8d4d6a4d2e8cb16d29d35c27e7830"} Feb 20 11:49:43.933252 master-0 kubenswrapper[7756]: I0220 11:49:43.933028 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-z82cm" event={"ID":"b9eb45bd-fc01-4707-87ea-64f07f72f6f9","Type":"ContainerStarted","Data":"ff0cce7da53a6e4c2ebc0872440f95104040b68cb4cef7228a83f4a954522bc8"} Feb 20 11:49:44.902488 master-0 kubenswrapper[7756]: I0220 11:49:44.900994 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-687c46c5b-2xbrj"] Feb 20 11:49:44.902488 master-0 kubenswrapper[7756]: I0220 11:49:44.901637 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.931489 master-0 kubenswrapper[7756]: I0220 11:49:44.931418 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 20 11:49:44.967800 master-0 kubenswrapper[7756]: I0220 11:49:44.967165 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 20 11:49:44.971555 master-0 kubenswrapper[7756]: I0220 11:49:44.970901 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 20 11:49:44.975556 master-0 kubenswrapper[7756]: I0220 11:49:44.972465 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 20 11:49:44.975556 master-0 kubenswrapper[7756]: I0220 11:49:44.972715 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 20 11:49:44.975556 master-0 kubenswrapper[7756]: I0220 11:49:44.972951 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 20 11:49:44.975556 master-0 kubenswrapper[7756]: I0220 11:49:44.974068 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 20 11:49:44.975556 master-0 kubenswrapper[7756]: I0220 11:49:44.974185 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 20 11:49:44.975556 master-0 kubenswrapper[7756]: I0220 11:49:44.974294 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 20 11:49:44.975556 master-0 kubenswrapper[7756]: I0220 11:49:44.974608 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7cd76464f7-76wdb"] Feb 20 11:49:44.975556 master-0 kubenswrapper[7756]: I0220 11:49:44.974990 7756 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-687c46c5b-2xbrj"] Feb 20 11:49:44.977480 master-0 kubenswrapper[7756]: I0220 11:49:44.977081 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-7cd76464f7-76wdb"] Feb 20 11:49:44.977480 master-0 kubenswrapper[7756]: I0220 11:49:44.977311 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 20 11:49:44.979480 master-0 kubenswrapper[7756]: I0220 11:49:44.979221 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-687c46c5b-2xbrj"] Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.987722 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-trusted-ca-bundle\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.987818 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-client\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.987843 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-image-import-ca\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 
11:49:44.987887 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.987904 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit-dir\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.987922 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-serving-ca\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.987935 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-serving-cert\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.987958 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8pq\" (UniqueName: \"kubernetes.io/projected/dd7698c6-bc33-4416-9417-dfe3ecd706cb-kube-api-access-gv8pq\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " 
pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.987973 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-config\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.988005 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-encryption-config\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.988232 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-node-pullsecrets\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.988287 7756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.988299 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46a5dc4-89d6-4be7-8aac-11f034d25076-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 11:49:44.998557 master-0 kubenswrapper[7756]: I0220 11:49:44.988307 7756 reconciler_common.go:293] "Volume detached for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d46a5dc4-89d6-4be7-8aac-11f034d25076-audit\") on node \"master-0\" DevicePath \"\"" Feb 20 11:49:45.016553 master-0 kubenswrapper[7756]: I0220 11:49:45.013518 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-z82cm" podStartSLOduration=4.013497498 podStartE2EDuration="4.013497498s" podCreationTimestamp="2026-02-20 11:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:49:45.01320655 +0000 UTC m=+30.755454558" watchObservedRunningTime="2026-02-20 11:49:45.013497498 +0000 UTC m=+30.755745516" Feb 20 11:49:45.022449 master-0 kubenswrapper[7756]: I0220 11:49:45.021944 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-gkxzr" event={"ID":"906307ef-d988-49e7-9d63-39116a2c4880","Type":"ContainerStarted","Data":"f72d3c1c6874a2613f14b821d8bbcda7ed1edb4eebc620145ed623a06b7f5a8c"} Feb 20 11:49:45.022675 master-0 kubenswrapper[7756]: E0220 11:49:45.022033 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit audit-dir config encryption-config etcd-client etcd-serving-ca image-import-ca kube-api-access-gv8pq node-pullsecrets serving-cert trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" podUID="dd7698c6-bc33-4416-9417-dfe3ecd706cb" Feb 20 11:49:45.043583 master-0 kubenswrapper[7756]: I0220 11:49:45.042139 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn"] Feb 20 11:49:45.043583 master-0 kubenswrapper[7756]: I0220 11:49:45.042662 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" Feb 20 11:49:45.046156 master-0 kubenswrapper[7756]: I0220 11:49:45.046116 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kx4ch"] Feb 20 11:49:45.047135 master-0 kubenswrapper[7756]: I0220 11:49:45.047115 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.055093 master-0 kubenswrapper[7756]: I0220 11:49:45.055055 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 20 11:49:45.055744 master-0 kubenswrapper[7756]: I0220 11:49:45.055727 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 20 11:49:45.055979 master-0 kubenswrapper[7756]: I0220 11:49:45.055964 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 20 11:49:45.056198 master-0 kubenswrapper[7756]: I0220 11:49:45.056184 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 20 11:49:45.058720 master-0 kubenswrapper[7756]: I0220 11:49:45.058684 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn"] Feb 20 11:49:45.060078 master-0 kubenswrapper[7756]: I0220 11:49:45.059965 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kx4ch"] Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.089765 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-trusted-ca-bundle\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 
11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.089846 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-client\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.089863 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-image-import-ca\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.089888 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.089904 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit-dir\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.089917 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-serving-cert\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 
master-0 kubenswrapper[7756]: I0220 11:49:45.089939 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-serving-ca\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.089953 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8pq\" (UniqueName: \"kubernetes.io/projected/dd7698c6-bc33-4416-9417-dfe3ecd706cb-kube-api-access-gv8pq\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.089968 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-config\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.089989 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-encryption-config\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.090013 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-node-pullsecrets\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " 
pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091064 master-0 kubenswrapper[7756]: I0220 11:49:45.090100 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-node-pullsecrets\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.091704 master-0 kubenswrapper[7756]: I0220 11:49:45.091319 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-image-import-ca\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.092672 master-0 kubenswrapper[7756]: I0220 11:49:45.092247 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-config\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.096715 master-0 kubenswrapper[7756]: I0220 11:49:45.093614 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-trusted-ca-bundle\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.096715 master-0 kubenswrapper[7756]: I0220 11:49:45.091325 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-serving-ca\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " 
pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.096715 master-0 kubenswrapper[7756]: I0220 11:49:45.093968 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit-dir\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.096715 master-0 kubenswrapper[7756]: I0220 11:49:45.094224 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.097059 master-0 kubenswrapper[7756]: I0220 11:49:45.097011 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-encryption-config\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.097286 master-0 kubenswrapper[7756]: I0220 11:49:45.097255 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-serving-cert\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.097335 master-0 kubenswrapper[7756]: I0220 11:49:45.097298 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-client\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 
20 11:49:45.125313 master-0 kubenswrapper[7756]: I0220 11:49:45.125079 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8pq\" (UniqueName: \"kubernetes.io/projected/dd7698c6-bc33-4416-9417-dfe3ecd706cb-kube-api-access-gv8pq\") pod \"apiserver-687c46c5b-2xbrj\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:45.192445 master-0 kubenswrapper[7756]: I0220 11:49:45.191788 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af18215b-e749-4565-bb6c-24e92c452817-config-volume\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.192445 master-0 kubenswrapper[7756]: I0220 11:49:45.191841 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c9xz\" (UniqueName: \"kubernetes.io/projected/af18215b-e749-4565-bb6c-24e92c452817-kube-api-access-7c9xz\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.192445 master-0 kubenswrapper[7756]: I0220 11:49:45.191886 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af18215b-e749-4565-bb6c-24e92c452817-metrics-tls\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.192445 master-0 kubenswrapper[7756]: I0220 11:49:45.191908 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4j88\" (UniqueName: \"kubernetes.io/projected/bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4-kube-api-access-s4j88\") pod \"csi-snapshot-controller-6847bb4785-792hn\" (UID: 
\"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" Feb 20 11:49:45.295280 master-0 kubenswrapper[7756]: I0220 11:49:45.293847 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af18215b-e749-4565-bb6c-24e92c452817-config-volume\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.295280 master-0 kubenswrapper[7756]: I0220 11:49:45.293988 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c9xz\" (UniqueName: \"kubernetes.io/projected/af18215b-e749-4565-bb6c-24e92c452817-kube-api-access-7c9xz\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.295280 master-0 kubenswrapper[7756]: I0220 11:49:45.294170 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af18215b-e749-4565-bb6c-24e92c452817-metrics-tls\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.295280 master-0 kubenswrapper[7756]: I0220 11:49:45.294247 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4j88\" (UniqueName: \"kubernetes.io/projected/bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4-kube-api-access-s4j88\") pod \"csi-snapshot-controller-6847bb4785-792hn\" (UID: \"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" Feb 20 11:49:45.297554 master-0 kubenswrapper[7756]: I0220 11:49:45.295839 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/af18215b-e749-4565-bb6c-24e92c452817-config-volume\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.324915 master-0 kubenswrapper[7756]: I0220 11:49:45.324858 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af18215b-e749-4565-bb6c-24e92c452817-metrics-tls\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.338610 master-0 kubenswrapper[7756]: I0220 11:49:45.338487 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4j88\" (UniqueName: \"kubernetes.io/projected/bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4-kube-api-access-s4j88\") pod \"csi-snapshot-controller-6847bb4785-792hn\" (UID: \"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" Feb 20 11:49:45.343252 master-0 kubenswrapper[7756]: I0220 11:49:45.343213 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c9xz\" (UniqueName: \"kubernetes.io/projected/af18215b-e749-4565-bb6c-24e92c452817-kube-api-access-7c9xz\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.398402 master-0 kubenswrapper[7756]: I0220 11:49:45.398348 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" Feb 20 11:49:45.434548 master-0 kubenswrapper[7756]: I0220 11:49:45.433768 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:45.535958 master-0 kubenswrapper[7756]: I0220 11:49:45.535907 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-jlp7n"] Feb 20 11:49:45.537243 master-0 kubenswrapper[7756]: I0220 11:49:45.537213 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jlp7n" Feb 20 11:49:45.697710 master-0 kubenswrapper[7756]: I0220 11:49:45.697606 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2795m\" (UniqueName: \"kubernetes.io/projected/afa174b3-912c-4b56-b5eb-f3e3df012c11-kube-api-access-2795m\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 11:49:45.697710 master-0 kubenswrapper[7756]: I0220 11:49:45.697665 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afa174b3-912c-4b56-b5eb-f3e3df012c11-hosts-file\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 11:49:45.798987 master-0 kubenswrapper[7756]: I0220 11:49:45.798913 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2795m\" (UniqueName: \"kubernetes.io/projected/afa174b3-912c-4b56-b5eb-f3e3df012c11-kube-api-access-2795m\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 11:49:45.799311 master-0 kubenswrapper[7756]: I0220 11:49:45.799263 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afa174b3-912c-4b56-b5eb-f3e3df012c11-hosts-file\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " 
pod="openshift-dns/node-resolver-jlp7n" Feb 20 11:49:45.799532 master-0 kubenswrapper[7756]: I0220 11:49:45.799473 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afa174b3-912c-4b56-b5eb-f3e3df012c11-hosts-file\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 11:49:45.926355 master-0 kubenswrapper[7756]: I0220 11:49:45.926298 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2795m\" (UniqueName: \"kubernetes.io/projected/afa174b3-912c-4b56-b5eb-f3e3df012c11-kube-api-access-2795m\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 11:49:46.035947 master-0 kubenswrapper[7756]: I0220 11:49:46.034812 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:46.039753 master-0 kubenswrapper[7756]: I0220 11:49:46.035515 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" event={"ID":"ce2b6fde-de56-49c3-9bd6-e81c679b02bc","Type":"ContainerStarted","Data":"f3706b3c34cf4ca963f10ba2e8498b0291187d135d8a240b66a3eb3e3ede44fb"} Feb 20 11:49:46.057420 master-0 kubenswrapper[7756]: I0220 11:49:46.051372 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jlp7n" Feb 20 11:49:46.105966 master-0 kubenswrapper[7756]: I0220 11:49:46.105886 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-687c46c5b-2xbrj" Feb 20 11:49:46.106403 master-0 kubenswrapper[7756]: I0220 11:49:46.106362 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-serving-ca\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " Feb 20 11:49:46.108064 master-0 kubenswrapper[7756]: I0220 11:49:46.107611 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.213806 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.213885 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-client\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.213921 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-trusted-ca-bundle\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") " Feb 20 
11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.213956 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-image-import-ca\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") "
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.213990 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-node-pullsecrets\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") "
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.214017 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-encryption-config\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") "
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.214073 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit-dir\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") "
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.214112 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-config\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") "
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.214159 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-serving-cert\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") "
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.214240 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8pq\" (UniqueName: \"kubernetes.io/projected/dd7698c6-bc33-4416-9417-dfe3ecd706cb-kube-api-access-gv8pq\") pod \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\" (UID: \"dd7698c6-bc33-4416-9417-dfe3ecd706cb\") "
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.214624 7756 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.217968 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit" (OuterVolumeSpecName: "audit") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.218090 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.218504 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.221097 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.221546 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.221909 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.222264 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-config" (OuterVolumeSpecName: "config") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.223647 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:49:46.230064 master-0 kubenswrapper[7756]: I0220 11:49:46.226113 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:49:46.240307 master-0 kubenswrapper[7756]: I0220 11:49:46.230958 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7698c6-bc33-4416-9417-dfe3ecd706cb-kube-api-access-gv8pq" (OuterVolumeSpecName: "kube-api-access-gv8pq") pod "dd7698c6-bc33-4416-9417-dfe3ecd706cb" (UID: "dd7698c6-bc33-4416-9417-dfe3ecd706cb"). InnerVolumeSpecName "kube-api-access-gv8pq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:49:46.240307 master-0 kubenswrapper[7756]: W0220 11:49:46.239875 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf8dc2a9_fcc6_41b4_ae05_ed27cc60a2f4.slice/crio-8cf490279cd50e81a0597e17ffd2c0830f353d5b000ce0e906995ead9d10342b WatchSource:0}: Error finding container 8cf490279cd50e81a0597e17ffd2c0830f353d5b000ce0e906995ead9d10342b: Status 404 returned error can't find the container with id 8cf490279cd50e81a0597e17ffd2c0830f353d5b000ce0e906995ead9d10342b
Feb 20 11:49:46.257296 master-0 kubenswrapper[7756]: I0220 11:49:46.251211 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn"]
Feb 20 11:49:46.257296 master-0 kubenswrapper[7756]: I0220 11:49:46.252864 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kx4ch"]
Feb 20 11:49:46.268774 master-0 kubenswrapper[7756]: W0220 11:49:46.268703 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf18215b_e749_4565_bb6c_24e92c452817.slice/crio-5b2746caab687d58b26002188b5ccba20de2a04cd6da171355541cf375046c0d WatchSource:0}: Error finding container 5b2746caab687d58b26002188b5ccba20de2a04cd6da171355541cf375046c0d: Status 404 returned error can't find the container with id 5b2746caab687d58b26002188b5ccba20de2a04cd6da171355541cf375046c0d
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320780 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv8pq\" (UniqueName: \"kubernetes.io/projected/dd7698c6-bc33-4416-9417-dfe3ecd706cb-kube-api-access-gv8pq\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320824 7756 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320835 7756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-etcd-client\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320846 7756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320856 7756 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-image-import-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320867 7756 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320878 7756 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-encryption-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320888 7756 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd7698c6-bc33-4416-9417-dfe3ecd706cb-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320899 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7698c6-bc33-4416-9417-dfe3ecd706cb-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.323791 master-0 kubenswrapper[7756]: I0220 11:49:46.320911 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd7698c6-bc33-4416-9417-dfe3ecd706cb-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:46.587468 master-0 kubenswrapper[7756]: I0220 11:49:46.587316 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d46a5dc4-89d6-4be7-8aac-11f034d25076" path="/var/lib/kubelet/pods/d46a5dc4-89d6-4be7-8aac-11f034d25076/volumes"
Feb 20 11:49:47.046794 master-0 kubenswrapper[7756]: I0220 11:49:47.046710 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kx4ch" event={"ID":"af18215b-e749-4565-bb6c-24e92c452817","Type":"ContainerStarted","Data":"5b2746caab687d58b26002188b5ccba20de2a04cd6da171355541cf375046c0d"}
Feb 20 11:49:47.050144 master-0 kubenswrapper[7756]: I0220 11:49:47.050007 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerStarted","Data":"8cf490279cd50e81a0597e17ffd2c0830f353d5b000ce0e906995ead9d10342b"}
Feb 20 11:49:47.053704 master-0 kubenswrapper[7756]: I0220 11:49:47.053632 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jlp7n" event={"ID":"afa174b3-912c-4b56-b5eb-f3e3df012c11","Type":"ContainerStarted","Data":"aaf3e9c5522251f5f37384cf2ce59e46c462b4b343c2d24877effdd5526f048e"}
Feb 20 11:49:47.053798 master-0 kubenswrapper[7756]: I0220 11:49:47.053691 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-687c46c5b-2xbrj"
Feb 20 11:49:47.053798 master-0 kubenswrapper[7756]: I0220 11:49:47.053738 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jlp7n" event={"ID":"afa174b3-912c-4b56-b5eb-f3e3df012c11","Type":"ContainerStarted","Data":"7b0a0741b1c4a0dbf76177da995e7cc407702a375fbd2c1f79e4ec49f22b6e5f"}
Feb 20 11:49:47.070697 master-0 kubenswrapper[7756]: I0220 11:49:47.069715 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jlp7n" podStartSLOduration=2.0696965880000002 podStartE2EDuration="2.069696588s" podCreationTimestamp="2026-02-20 11:49:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:49:47.069331808 +0000 UTC m=+32.811579826" watchObservedRunningTime="2026-02-20 11:49:47.069696588 +0000 UTC m=+32.811944596"
Feb 20 11:49:47.107849 master-0 kubenswrapper[7756]: I0220 11:49:47.107780 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-687c46c5b-2xbrj"]
Feb 20 11:49:47.109480 master-0 kubenswrapper[7756]: I0220 11:49:47.109423 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7666bb78cc-jxswr"]
Feb 20 11:49:47.124725 master-0 kubenswrapper[7756]: I0220 11:49:47.124648 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.126914 master-0 kubenswrapper[7756]: I0220 11:49:47.126865 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 20 11:49:47.133193 master-0 kubenswrapper[7756]: I0220 11:49:47.133114 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 20 11:49:47.133439 master-0 kubenswrapper[7756]: I0220 11:49:47.133400 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 20 11:49:47.133764 master-0 kubenswrapper[7756]: I0220 11:49:47.133721 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 20 11:49:47.133898 master-0 kubenswrapper[7756]: I0220 11:49:47.133875 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 20 11:49:47.134113 master-0 kubenswrapper[7756]: I0220 11:49:47.134075 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 20 11:49:47.134193 master-0 kubenswrapper[7756]: I0220 11:49:47.134180 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 20 11:49:47.134398 master-0 kubenswrapper[7756]: I0220 11:49:47.134358 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 20 11:49:47.138422 master-0 kubenswrapper[7756]: I0220 11:49:47.138367 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 20 11:49:47.140149 master-0 kubenswrapper[7756]: I0220 11:49:47.140096 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 20 11:49:47.156444 master-0 kubenswrapper[7756]: I0220 11:49:47.151253 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-687c46c5b-2xbrj"]
Feb 20 11:49:47.193550 master-0 kubenswrapper[7756]: I0220 11:49:47.180137 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7666bb78cc-jxswr"]
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.231805 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-serving-cert\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.231907 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp57v\" (UniqueName: \"kubernetes.io/projected/59c1cc61-8692-4a35-83fc-6bbef7086117-kube-api-access-mp57v\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.232113 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-image-import-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.232207 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-audit-dir\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.232322 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-serving-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.232377 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-trusted-ca-bundle\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.232401 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-encryption-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.232444 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-node-pullsecrets\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.232471 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-audit\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.232583 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.233551 master-0 kubenswrapper[7756]: I0220 11:49:47.232607 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-client\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.332895 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-serving-cert\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.332955 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp57v\" (UniqueName: \"kubernetes.io/projected/59c1cc61-8692-4a35-83fc-6bbef7086117-kube-api-access-mp57v\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.332978 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.332999 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-image-import-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.333778 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-image-import-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.333806 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-audit-dir\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.333934 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-serving-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.333939 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-audit-dir\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.333981 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-trusted-ca-bundle\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.333998 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-encryption-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.334025 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-audit\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.334181 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-node-pullsecrets\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.334250 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.334279 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-client\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.334868 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-serving-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.334960 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-node-pullsecrets\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.335100 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-audit\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.335296 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-trusted-ca-bundle\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.335831 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.336546 master-0 kubenswrapper[7756]: I0220 11:49:47.336081 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-serving-cert\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.337286 master-0 kubenswrapper[7756]: I0220 11:49:47.337249 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-client\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.338566 master-0 kubenswrapper[7756]: I0220 11:49:47.338093 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-encryption-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.341477 master-0 kubenswrapper[7756]: I0220 11:49:47.341426 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:49:47.349320 master-0 kubenswrapper[7756]: I0220 11:49:47.349277 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp57v\" (UniqueName: \"kubernetes.io/projected/59c1cc61-8692-4a35-83fc-6bbef7086117-kube-api-access-mp57v\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.435092 master-0 kubenswrapper[7756]: I0220 11:49:47.434979 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:47.435092 master-0 kubenswrapper[7756]: I0220 11:49:47.435031 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:47.435092 master-0 kubenswrapper[7756]: I0220 11:49:47.435051 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:47.436006 master-0 kubenswrapper[7756]: I0220 11:49:47.435986 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:47.436073 master-0 kubenswrapper[7756]: I0220 11:49:47.436014 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:47.436073 master-0 kubenswrapper[7756]: I0220 11:49:47.436037 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:47.436419 master-0 kubenswrapper[7756]: I0220 11:49:47.436369 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:47.438382 master-0 kubenswrapper[7756]: I0220 11:49:47.438361 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:47.438755 master-0 kubenswrapper[7756]: I0220 11:49:47.438734 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:47.439459 master-0 kubenswrapper[7756]: I0220 11:49:47.439437 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:47.440009 master-0 kubenswrapper[7756]: I0220 11:49:47.439950 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:47.441077 master-0 kubenswrapper[7756]: I0220 11:49:47.440914 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:47.441077 master-0 kubenswrapper[7756]: I0220 11:49:47.440965 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-jgv89\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:47.446607 master-0 kubenswrapper[7756]: I0220 11:49:47.446549 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:47.481435 master-0 kubenswrapper[7756]: I0220 11:49:47.481396 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:49:47.597256 master-0 kubenswrapper[7756]: I0220 11:49:47.597202 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:49:47.599278 master-0 kubenswrapper[7756]: I0220 11:49:47.599252 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:49:47.600034 master-0 kubenswrapper[7756]: I0220 11:49:47.599996 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:49:47.607428 master-0 kubenswrapper[7756]: I0220 11:49:47.607382 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 11:49:47.612763 master-0 kubenswrapper[7756]: I0220 11:49:47.612612 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:47.613619 master-0 kubenswrapper[7756]: I0220 11:49:47.613306 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-29622"
Feb 20 11:49:47.613619 master-0 kubenswrapper[7756]: I0220 11:49:47.613318 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"
Feb 20 11:49:47.613778 master-0 kubenswrapper[7756]: I0220 11:49:47.613745 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"
Feb 20 11:49:47.770336 master-0 kubenswrapper[7756]: I0220 11:49:47.770237 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx"]
Feb 20 11:49:47.773159 master-0 kubenswrapper[7756]: I0220 11:49:47.772149 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.774545 master-0 kubenswrapper[7756]: I0220 11:49:47.774441 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 20 11:49:47.776430 master-0 kubenswrapper[7756]: I0220 11:49:47.775554 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 20 11:49:47.776430 master-0 kubenswrapper[7756]: I0220 11:49:47.775796 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 20 11:49:47.776430 master-0 kubenswrapper[7756]: I0220 11:49:47.775896 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 20 11:49:47.776430 master-0 kubenswrapper[7756]: I0220 11:49:47.776057 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 20 11:49:47.776430 master-0 kubenswrapper[7756]: I0220 11:49:47.776157 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 20 11:49:47.778486 master-0 kubenswrapper[7756]: I0220 11:49:47.778400 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx"] Feb 20 11:49:47.779066 master-0 kubenswrapper[7756]: I0220 11:49:47.779019 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 20 11:49:47.779404 master-0 kubenswrapper[7756]: I0220 11:49:47.779372 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 20 11:49:47.849675 master-0 kubenswrapper[7756]: I0220 11:49:47.849607 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-serving-cert\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.849675 master-0 kubenswrapper[7756]: I0220 11:49:47.849658 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-serving-ca\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.849675 master-0 kubenswrapper[7756]: I0220 11:49:47.849679 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c183ec2-be40-4781-aabd-928c4f70661e-audit-dir\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.850091 master-0 kubenswrapper[7756]: I0220 11:49:47.849714 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca\") pod \"controller-manager-8594984f84-vjpf6\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") " pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6" Feb 20 11:49:47.850091 master-0 kubenswrapper[7756]: I0220 11:49:47.849733 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-encryption-config\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.850091 master-0 kubenswrapper[7756]: I0220 
11:49:47.849803 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n4rw\" (UniqueName: \"kubernetes.io/projected/1c183ec2-be40-4781-aabd-928c4f70661e-kube-api-access-2n4rw\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.850091 master-0 kubenswrapper[7756]: E0220 11:49:47.849852 7756 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 11:49:47.850091 master-0 kubenswrapper[7756]: E0220 11:49:47.849902 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca podName:bc8c376c-445b-45c2-ab0c-9269265870c4 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:55.849887068 +0000 UTC m=+41.592135076 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca") pod "controller-manager-8594984f84-vjpf6" (UID: "bc8c376c-445b-45c2-ab0c-9269265870c4") : configmap "client-ca" not found Feb 20 11:49:47.850091 master-0 kubenswrapper[7756]: I0220 11:49:47.850070 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-audit-policies\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.850469 master-0 kubenswrapper[7756]: I0220 11:49:47.850164 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-client\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: 
\"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.850469 master-0 kubenswrapper[7756]: I0220 11:49:47.850355 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-trusted-ca-bundle\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.910543 master-0 kubenswrapper[7756]: I0220 11:49:47.908237 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7666bb78cc-jxswr"] Feb 20 11:49:47.951549 master-0 kubenswrapper[7756]: I0220 11:49:47.951064 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-encryption-config\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.951549 master-0 kubenswrapper[7756]: I0220 11:49:47.951132 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n4rw\" (UniqueName: \"kubernetes.io/projected/1c183ec2-be40-4781-aabd-928c4f70661e-kube-api-access-2n4rw\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.951549 master-0 kubenswrapper[7756]: I0220 11:49:47.951155 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-audit-policies\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 
11:49:47.951549 master-0 kubenswrapper[7756]: I0220 11:49:47.951177 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-client\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.951549 master-0 kubenswrapper[7756]: I0220 11:49:47.951230 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-trusted-ca-bundle\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.951549 master-0 kubenswrapper[7756]: I0220 11:49:47.951257 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-serving-cert\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.951549 master-0 kubenswrapper[7756]: I0220 11:49:47.951278 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-serving-ca\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.951549 master-0 kubenswrapper[7756]: I0220 11:49:47.951297 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c183ec2-be40-4781-aabd-928c4f70661e-audit-dir\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " 
pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.951549 master-0 kubenswrapper[7756]: I0220 11:49:47.951384 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c183ec2-be40-4781-aabd-928c4f70661e-audit-dir\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.959146 master-0 kubenswrapper[7756]: I0220 11:49:47.959102 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-serving-ca\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.959518 master-0 kubenswrapper[7756]: I0220 11:49:47.959480 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-audit-policies\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.959580 master-0 kubenswrapper[7756]: I0220 11:49:47.959515 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-trusted-ca-bundle\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.961614 master-0 kubenswrapper[7756]: I0220 11:49:47.961551 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-serving-cert\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: 
\"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.962283 master-0 kubenswrapper[7756]: I0220 11:49:47.962218 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-encryption-config\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.963520 master-0 kubenswrapper[7756]: I0220 11:49:47.963495 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-client\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:47.980591 master-0 kubenswrapper[7756]: I0220 11:49:47.980548 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n4rw\" (UniqueName: \"kubernetes.io/projected/1c183ec2-be40-4781-aabd-928c4f70661e-kube-api-access-2n4rw\") pod \"apiserver-58558b4f4c-b5nxx\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:48.097218 master-0 kubenswrapper[7756]: I0220 11:49:48.097089 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:49:48.141516 master-0 kubenswrapper[7756]: W0220 11:49:48.141453 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59c1cc61_8692_4a35_83fc_6bbef7086117.slice/crio-3492cbd782b3ac55acb0d1ebd2aa664af10267490d59604deb78eb50aef952ff WatchSource:0}: Error finding container 3492cbd782b3ac55acb0d1ebd2aa664af10267490d59604deb78eb50aef952ff: Status 404 returned error can't find the container with id 3492cbd782b3ac55acb0d1ebd2aa664af10267490d59604deb78eb50aef952ff Feb 20 11:49:48.587074 master-0 kubenswrapper[7756]: I0220 11:49:48.586947 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd7698c6-bc33-4416-9417-dfe3ecd706cb" path="/var/lib/kubelet/pods/dd7698c6-bc33-4416-9417-dfe3ecd706cb/volumes" Feb 20 11:49:49.108330 master-0 kubenswrapper[7756]: I0220 11:49:49.108184 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerStarted","Data":"dda80c885f92b57bca602a3a57fe7a72f775d424964427877643f5139f187abf"} Feb 20 11:49:49.124914 master-0 kubenswrapper[7756]: I0220 11:49:49.119022 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" event={"ID":"59c1cc61-8692-4a35-83fc-6bbef7086117","Type":"ContainerStarted","Data":"3492cbd782b3ac55acb0d1ebd2aa664af10267490d59604deb78eb50aef952ff"} Feb 20 11:49:49.124914 master-0 kubenswrapper[7756]: I0220 11:49:49.122435 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" podStartSLOduration=1.545383342 podStartE2EDuration="4.122377838s" podCreationTimestamp="2026-02-20 11:49:45 +0000 UTC" firstStartedPulling="2026-02-20 11:49:46.243666969 
+0000 UTC m=+31.985914987" lastFinishedPulling="2026-02-20 11:49:48.820661435 +0000 UTC m=+34.562909483" observedRunningTime="2026-02-20 11:49:49.120057302 +0000 UTC m=+34.862305300" watchObservedRunningTime="2026-02-20 11:49:49.122377838 +0000 UTC m=+34.864625846" Feb 20 11:49:49.344507 master-0 kubenswrapper[7756]: I0220 11:49:49.344455 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"] Feb 20 11:49:49.347367 master-0 kubenswrapper[7756]: I0220 11:49:49.347329 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"] Feb 20 11:49:49.359579 master-0 kubenswrapper[7756]: W0220 11:49:49.359468 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd65a0af4_c96f_44f8_9384_6bae4585983b.slice/crio-36eb1911b1d84465d4f3614b052501f0ab8200fc09c3cd58c9e93b58066e3180 WatchSource:0}: Error finding container 36eb1911b1d84465d4f3614b052501f0ab8200fc09c3cd58c9e93b58066e3180: Status 404 returned error can't find the container with id 36eb1911b1d84465d4f3614b052501f0ab8200fc09c3cd58c9e93b58066e3180 Feb 20 11:49:49.438985 master-0 kubenswrapper[7756]: I0220 11:49:49.438912 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"] Feb 20 11:49:49.708691 master-0 kubenswrapper[7756]: I0220 11:49:49.708447 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx"] Feb 20 11:49:49.722399 master-0 kubenswrapper[7756]: W0220 11:49:49.722032 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c183ec2_be40_4781_aabd_928c4f70661e.slice/crio-b21326213655ed5cbd77c1642d0a20ed5adab633f8bf2bcc2848b92a7011ed7f WatchSource:0}: Error finding container 
b21326213655ed5cbd77c1642d0a20ed5adab633f8bf2bcc2848b92a7011ed7f: Status 404 returned error can't find the container with id b21326213655ed5cbd77c1642d0a20ed5adab633f8bf2bcc2848b92a7011ed7f Feb 20 11:49:49.724025 master-0 kubenswrapper[7756]: W0220 11:49:49.723801 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dfca740_0387_428a_b957_3e8a09c6e352.slice/crio-826db63109cf25d66ed31a255738b519d4a9faae58f44b83818b33fc45665543 WatchSource:0}: Error finding container 826db63109cf25d66ed31a255738b519d4a9faae58f44b83818b33fc45665543: Status 404 returned error can't find the container with id 826db63109cf25d66ed31a255738b519d4a9faae58f44b83818b33fc45665543 Feb 20 11:49:49.724025 master-0 kubenswrapper[7756]: W0220 11:49:49.727340 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d060bff_3c25_4eeb_bdd3_e20fb2687645.slice/crio-39a1a5d33692c6053b9e75c3ef75f6d5e551935ea080f8573acf4698acb62831 WatchSource:0}: Error finding container 39a1a5d33692c6053b9e75c3ef75f6d5e551935ea080f8573acf4698acb62831: Status 404 returned error can't find the container with id 39a1a5d33692c6053b9e75c3ef75f6d5e551935ea080f8573acf4698acb62831 Feb 20 11:49:49.724025 master-0 kubenswrapper[7756]: I0220 11:49:49.728906 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"] Feb 20 11:49:49.732030 master-0 kubenswrapper[7756]: I0220 11:49:49.731640 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"] Feb 20 11:49:49.738629 master-0 kubenswrapper[7756]: I0220 11:49:49.738497 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"] Feb 20 11:49:49.741966 master-0 kubenswrapper[7756]: I0220 11:49:49.741152 7756 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-29622"] Feb 20 11:49:49.743130 master-0 kubenswrapper[7756]: W0220 11:49:49.743036 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1709ef31_9ddd_42bf_9a95_4be4502a0828.slice/crio-b246614c1f2f72db4cedbcce4b955bc3ac0b04e8bff7cc76cf229101226ee259 WatchSource:0}: Error finding container b246614c1f2f72db4cedbcce4b955bc3ac0b04e8bff7cc76cf229101226ee259: Status 404 returned error can't find the container with id b246614c1f2f72db4cedbcce4b955bc3ac0b04e8bff7cc76cf229101226ee259 Feb 20 11:49:49.749233 master-0 kubenswrapper[7756]: I0220 11:49:49.749167 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l"] Feb 20 11:49:49.757169 master-0 kubenswrapper[7756]: W0220 11:49:49.756386 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb135cff_1a2e_468d_80ab_f7db3f57552a.slice/crio-a7e80ad99f32fd1031084b1ec720eccfe0c30d3f2999f46f1a0b9a07c12c03d3 WatchSource:0}: Error finding container a7e80ad99f32fd1031084b1ec720eccfe0c30d3f2999f46f1a0b9a07c12c03d3: Status 404 returned error can't find the container with id a7e80ad99f32fd1031084b1ec720eccfe0c30d3f2999f46f1a0b9a07c12c03d3 Feb 20 11:49:50.133069 master-0 kubenswrapper[7756]: I0220 11:49:50.132997 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" event={"ID":"eb135cff-1a2e-468d-80ab-f7db3f57552a","Type":"ContainerStarted","Data":"b5ed8f366ad0d863cc0352d00a5c8a35bfb3ef99da5c5cd7e671a3f150c1fa29"} Feb 20 11:49:50.133069 master-0 kubenswrapper[7756]: I0220 11:49:50.133070 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" 
event={"ID":"eb135cff-1a2e-468d-80ab-f7db3f57552a","Type":"ContainerStarted","Data":"9583a5d028e457a8b1106eee87ac3a3f6e2e8ded0c2d13dad805b6ccfd5190e1"} Feb 20 11:49:50.133624 master-0 kubenswrapper[7756]: I0220 11:49:50.133086 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" event={"ID":"eb135cff-1a2e-468d-80ab-f7db3f57552a","Type":"ContainerStarted","Data":"a7e80ad99f32fd1031084b1ec720eccfe0c30d3f2999f46f1a0b9a07c12c03d3"} Feb 20 11:49:50.135921 master-0 kubenswrapper[7756]: I0220 11:49:50.135861 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" event={"ID":"1c183ec2-be40-4781-aabd-928c4f70661e","Type":"ContainerStarted","Data":"b21326213655ed5cbd77c1642d0a20ed5adab633f8bf2bcc2848b92a7011ed7f"} Feb 20 11:49:50.141642 master-0 kubenswrapper[7756]: I0220 11:49:50.138006 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-29622" event={"ID":"1709ef31-9ddd-42bf-9a95-4be4502a0828","Type":"ContainerStarted","Data":"b246614c1f2f72db4cedbcce4b955bc3ac0b04e8bff7cc76cf229101226ee259"} Feb 20 11:49:50.141642 master-0 kubenswrapper[7756]: I0220 11:49:50.139172 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" event={"ID":"4d060bff-3c25-4eeb-bdd3-e20fb2687645","Type":"ContainerStarted","Data":"39a1a5d33692c6053b9e75c3ef75f6d5e551935ea080f8573acf4698acb62831"} Feb 20 11:49:50.141642 master-0 kubenswrapper[7756]: I0220 11:49:50.140487 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" event={"ID":"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783","Type":"ContainerStarted","Data":"bd39daabfdce6754d4a4f78c48fcaecbdad1e1d29636e311b156f098b7cc24fe"} Feb 20 11:49:50.142986 master-0 kubenswrapper[7756]: I0220 11:49:50.142931 7756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" event={"ID":"22bba1b3-587d-4802-b4ae-946827c3fa7a","Type":"ContainerStarted","Data":"47b2f781a814a8d1bcfc1cccd7e4c348407c92b6cdeff2bb7b600cfbaa766dff"} Feb 20 11:49:50.144664 master-0 kubenswrapper[7756]: I0220 11:49:50.144626 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" event={"ID":"6dfca740-0387-428a-b957-3e8a09c6e352","Type":"ContainerStarted","Data":"826db63109cf25d66ed31a255738b519d4a9faae58f44b83818b33fc45665543"} Feb 20 11:49:50.148932 master-0 kubenswrapper[7756]: I0220 11:49:50.148864 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" event={"ID":"d65a0af4-c96f-44f8-9384-6bae4585983b","Type":"ContainerStarted","Data":"36eb1911b1d84465d4f3614b052501f0ab8200fc09c3cd58c9e93b58066e3180"} Feb 20 11:49:50.157900 master-0 kubenswrapper[7756]: I0220 11:49:50.156000 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" event={"ID":"dbce6cdc-040a-48e1-8a81-b6ff9c180eba","Type":"ContainerStarted","Data":"3e1f3522b8c5324a3d01947568a95f0661361b2581e77220b87002f02127d281"} Feb 20 11:49:50.157900 master-0 kubenswrapper[7756]: I0220 11:49:50.156065 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" event={"ID":"dbce6cdc-040a-48e1-8a81-b6ff9c180eba","Type":"ContainerStarted","Data":"7127b21b93cf0d636eeb4e29ca5a97fd29d095d44d5d5c9994999fa758bf4565"} Feb 20 11:49:50.158450 master-0 kubenswrapper[7756]: I0220 11:49:50.158401 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kx4ch" 
event={"ID":"af18215b-e749-4565-bb6c-24e92c452817","Type":"ContainerStarted","Data":"8aec7b4d1ebeddbd875e9e8b54da98abf4170fdb876d94e2e9df81795a2136d1"} Feb 20 11:49:50.158450 master-0 kubenswrapper[7756]: I0220 11:49:50.158432 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kx4ch" event={"ID":"af18215b-e749-4565-bb6c-24e92c452817","Type":"ContainerStarted","Data":"1d1ee6b4575573729b9753c1824a734f0bea57acdd9ae97c375e5988c47ac502"} Feb 20 11:49:50.191598 master-0 kubenswrapper[7756]: I0220 11:49:50.188052 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kx4ch" podStartSLOduration=2.635983464 podStartE2EDuration="5.187998356s" podCreationTimestamp="2026-02-20 11:49:45 +0000 UTC" firstStartedPulling="2026-02-20 11:49:46.271934587 +0000 UTC m=+32.014182595" lastFinishedPulling="2026-02-20 11:49:48.823949469 +0000 UTC m=+34.566197487" observedRunningTime="2026-02-20 11:49:50.184160057 +0000 UTC m=+35.926408075" watchObservedRunningTime="2026-02-20 11:49:50.187998356 +0000 UTC m=+35.930246364" Feb 20 11:49:51.010106 master-0 kubenswrapper[7756]: I0220 11:49:51.010057 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca\") pod \"route-controller-manager-596cddd866-6nmjb\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") " pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb" Feb 20 11:49:51.010544 master-0 kubenswrapper[7756]: E0220 11:49:51.010196 7756 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 20 11:49:51.010544 master-0 kubenswrapper[7756]: E0220 11:49:51.010247 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca podName:739a6cfd-c386-4ac9-8b18-cf913bd6cc61 
nodeName:}" failed. No retries permitted until 2026-02-20 11:50:07.010232497 +0000 UTC m=+52.752480505 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca") pod "route-controller-manager-596cddd866-6nmjb" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61") : configmap "client-ca" not found Feb 20 11:49:51.164270 master-0 kubenswrapper[7756]: I0220 11:49:51.164195 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kx4ch" Feb 20 11:49:53.072887 master-0 kubenswrapper[7756]: I0220 11:49:53.072467 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8594984f84-vjpf6"] Feb 20 11:49:53.072887 master-0 kubenswrapper[7756]: E0220 11:49:53.072815 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6" podUID="bc8c376c-445b-45c2-ab0c-9269265870c4" Feb 20 11:49:53.094678 master-0 kubenswrapper[7756]: I0220 11:49:53.094629 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"] Feb 20 11:49:53.095378 master-0 kubenswrapper[7756]: E0220 11:49:53.095291 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb" podUID="739a6cfd-c386-4ac9-8b18-cf913bd6cc61" Feb 20 11:49:53.157294 master-0 kubenswrapper[7756]: I0220 11:49:53.157228 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 11:49:53.171491 master-0 kubenswrapper[7756]: I0220 
11:49:53.171189 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:53.171737 master-0 kubenswrapper[7756]: I0220 11:49:53.171638 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:53.179982 master-0 kubenswrapper[7756]: I0220 11:49:53.179950 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:53.185733 master-0 kubenswrapper[7756]: I0220 11:49:53.185688 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:53.336665 master-0 kubenswrapper[7756]: I0220 11:49:53.336507 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-proxy-ca-bundles\") pod \"bc8c376c-445b-45c2-ab0c-9269265870c4\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") "
Feb 20 11:49:53.336665 master-0 kubenswrapper[7756]: I0220 11:49:53.336584 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhxvq\" (UniqueName: \"kubernetes.io/projected/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-kube-api-access-hhxvq\") pod \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") "
Feb 20 11:49:53.336665 master-0 kubenswrapper[7756]: I0220 11:49:53.336609 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert\") pod \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") "
Feb 20 11:49:53.336665 master-0
kubenswrapper[7756]: I0220 11:49:53.336651 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8c376c-445b-45c2-ab0c-9269265870c4-serving-cert\") pod \"bc8c376c-445b-45c2-ab0c-9269265870c4\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") "
Feb 20 11:49:53.336665 master-0 kubenswrapper[7756]: I0220 11:49:53.336672 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-config\") pod \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\" (UID: \"739a6cfd-c386-4ac9-8b18-cf913bd6cc61\") "
Feb 20 11:49:53.337074 master-0 kubenswrapper[7756]: I0220 11:49:53.337028 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bc8c376c-445b-45c2-ab0c-9269265870c4" (UID: "bc8c376c-445b-45c2-ab0c-9269265870c4"). InnerVolumeSpecName "proxy-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:53.337197 master-0 kubenswrapper[7756]: I0220 11:49:53.337173 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m56fq\" (UniqueName: \"kubernetes.io/projected/bc8c376c-445b-45c2-ab0c-9269265870c4-kube-api-access-m56fq\") pod \"bc8c376c-445b-45c2-ab0c-9269265870c4\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") "
Feb 20 11:49:53.337247 master-0 kubenswrapper[7756]: I0220 11:49:53.337231 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-config\") pod \"bc8c376c-445b-45c2-ab0c-9269265870c4\" (UID: \"bc8c376c-445b-45c2-ab0c-9269265870c4\") "
Feb 20 11:49:53.337335 master-0 kubenswrapper[7756]: I0220 11:49:53.337290 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-config" (OuterVolumeSpecName: "config") pod "739a6cfd-c386-4ac9-8b18-cf913bd6cc61" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:53.337615 master-0 kubenswrapper[7756]: I0220 11:49:53.337580 7756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:53.337615 master-0 kubenswrapper[7756]: I0220 11:49:53.337601 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:53.337784 master-0 kubenswrapper[7756]: I0220 11:49:53.337760 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-config" (OuterVolumeSpecName: "config") pod "bc8c376c-445b-45c2-ab0c-9269265870c4" (UID: "bc8c376c-445b-45c2-ab0c-9269265870c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:53.340519 master-0 kubenswrapper[7756]: I0220 11:49:53.340449 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "739a6cfd-c386-4ac9-8b18-cf913bd6cc61" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:49:53.340667 master-0 kubenswrapper[7756]: I0220 11:49:53.340609 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-kube-api-access-hhxvq" (OuterVolumeSpecName: "kube-api-access-hhxvq") pod "739a6cfd-c386-4ac9-8b18-cf913bd6cc61" (UID: "739a6cfd-c386-4ac9-8b18-cf913bd6cc61"). InnerVolumeSpecName "kube-api-access-hhxvq".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:49:53.341117 master-0 kubenswrapper[7756]: I0220 11:49:53.341074 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8c376c-445b-45c2-ab0c-9269265870c4-kube-api-access-m56fq" (OuterVolumeSpecName: "kube-api-access-m56fq") pod "bc8c376c-445b-45c2-ab0c-9269265870c4" (UID: "bc8c376c-445b-45c2-ab0c-9269265870c4"). InnerVolumeSpecName "kube-api-access-m56fq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:49:53.343333 master-0 kubenswrapper[7756]: I0220 11:49:53.343293 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8c376c-445b-45c2-ab0c-9269265870c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc8c376c-445b-45c2-ab0c-9269265870c4" (UID: "bc8c376c-445b-45c2-ab0c-9269265870c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:49:53.438943 master-0 kubenswrapper[7756]: I0220 11:49:53.438858 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m56fq\" (UniqueName: \"kubernetes.io/projected/bc8c376c-445b-45c2-ab0c-9269265870c4-kube-api-access-m56fq\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:53.438943 master-0 kubenswrapper[7756]: I0220 11:49:53.438921 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:53.438943 master-0 kubenswrapper[7756]: I0220 11:49:53.438944 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhxvq\" (UniqueName: \"kubernetes.io/projected/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-kube-api-access-hhxvq\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:53.439255 master-0 kubenswrapper[7756]: I0220 11:49:53.438968 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\"
(UniqueName: \"kubernetes.io/secret/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:53.439255 master-0 kubenswrapper[7756]: I0220 11:49:53.438987 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8c376c-445b-45c2-ab0c-9269265870c4-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:54.157875 master-0 kubenswrapper[7756]: I0220 11:49:54.157798 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx"]
Feb 20 11:49:54.176936 master-0 kubenswrapper[7756]: I0220 11:49:54.175390 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594984f84-vjpf6"
Feb 20 11:49:54.176936 master-0 kubenswrapper[7756]: I0220 11:49:54.176217 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"
Feb 20 11:49:54.219500 master-0 kubenswrapper[7756]: I0220 11:49:54.219451 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"]
Feb 20 11:49:54.220158 master-0 kubenswrapper[7756]: I0220 11:49:54.220133 7756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.225900 master-0 kubenswrapper[7756]: I0220 11:49:54.224985 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"]
Feb 20 11:49:54.225900 master-0 kubenswrapper[7756]: I0220 11:49:54.225032 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596cddd866-6nmjb"]
Feb 20 11:49:54.228222 master-0 kubenswrapper[7756]: I0220 11:49:54.226512 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"]
Feb 20 11:49:54.228222 master-0 kubenswrapper[7756]: I0220 11:49:54.227482 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 11:49:54.228222 master-0 kubenswrapper[7756]: I0220 11:49:54.227645 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 20 11:49:54.228222 master-0 kubenswrapper[7756]: I0220 11:49:54.227764 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 20 11:49:54.228222 master-0 kubenswrapper[7756]: I0220 11:49:54.227960 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 20 11:49:54.228222 master-0 kubenswrapper[7756]: I0220 11:49:54.228073 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 11:49:54.251904 master-0 kubenswrapper[7756]: I0220 11:49:54.251614 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-serving-cert\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.251904 master-0 kubenswrapper[7756]: I0220 11:49:54.251720 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-client-ca\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.251904 master-0 kubenswrapper[7756]: I0220 11:49:54.251804 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-config\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.251904 master-0 kubenswrapper[7756]: I0220 11:49:54.251827 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbcxf\" (UniqueName: \"kubernetes.io/projected/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-kube-api-access-tbcxf\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.251904 master-0 kubenswrapper[7756]: I0220 11:49:54.251871 7756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/739a6cfd-c386-4ac9-8b18-cf913bd6cc61-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:54.260536 master-0 kubenswrapper[7756]: I0220 11:49:54.260472 7756
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8594984f84-vjpf6"]
Feb 20 11:49:54.261566 master-0 kubenswrapper[7756]: I0220 11:49:54.261545 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8594984f84-vjpf6"]
Feb 20 11:49:54.355082 master-0 kubenswrapper[7756]: I0220 11:49:54.355015 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-client-ca\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.355344 master-0 kubenswrapper[7756]: I0220 11:49:54.355304 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-config\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.355390 master-0 kubenswrapper[7756]: I0220 11:49:54.355347 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbcxf\" (UniqueName: \"kubernetes.io/projected/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-kube-api-access-tbcxf\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.355390 master-0 kubenswrapper[7756]: I0220 11:49:54.355373 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-serving-cert\") pod \"route-controller-manager-796b564b-tbg9p\" (UID:
\"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.355449 master-0 kubenswrapper[7756]: I0220 11:49:54.355404 7756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8c376c-445b-45c2-ab0c-9269265870c4-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:54.356124 master-0 kubenswrapper[7756]: I0220 11:49:54.355948 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-client-ca\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.363565 master-0 kubenswrapper[7756]: I0220 11:49:54.359629 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-config\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.364368 master-0 kubenswrapper[7756]: I0220 11:49:54.364302 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-serving-cert\") pod \"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.371842 master-0 kubenswrapper[7756]: I0220 11:49:54.371797 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbcxf\" (UniqueName: \"kubernetes.io/projected/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-kube-api-access-tbcxf\") pod
\"route-controller-manager-796b564b-tbg9p\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") " pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.539539 master-0 kubenswrapper[7756]: I0220 11:49:54.538922 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:49:54.585948 master-0 kubenswrapper[7756]: I0220 11:49:54.585882 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739a6cfd-c386-4ac9-8b18-cf913bd6cc61" path="/var/lib/kubelet/pods/739a6cfd-c386-4ac9-8b18-cf913bd6cc61/volumes"
Feb 20 11:49:54.586881 master-0 kubenswrapper[7756]: I0220 11:49:54.586697 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8c376c-445b-45c2-ab0c-9269265870c4" path="/var/lib/kubelet/pods/bc8c376c-445b-45c2-ab0c-9269265870c4/volumes"
Feb 20 11:49:56.133223 master-0 kubenswrapper[7756]: I0220 11:49:56.133181 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"]
Feb 20 11:49:56.133857 master-0 kubenswrapper[7756]: I0220 11:49:56.133839 7756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.135474 master-0 kubenswrapper[7756]: I0220 11:49:56.135383 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 20 11:49:56.135914 master-0 kubenswrapper[7756]: I0220 11:49:56.135646 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 20 11:49:56.135914 master-0 kubenswrapper[7756]: I0220 11:49:56.135769 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 20 11:49:56.155771 master-0 kubenswrapper[7756]: I0220 11:49:56.155719 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"]
Feb 20 11:49:56.196492 master-0 kubenswrapper[7756]: I0220 11:49:56.192152 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.196492 master-0 kubenswrapper[7756]: I0220 11:49:56.192221 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.196492 master-0 kubenswrapper[7756]: I0220 11:49:56.192251 7756
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc9wx\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-kube-api-access-sc9wx\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.196492 master-0 kubenswrapper[7756]: I0220 11:49:56.192278 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.196492 master-0 kubenswrapper[7756]: I0220 11:49:56.192313 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.201252 master-0 kubenswrapper[7756]: I0220 11:49:56.201185 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" event={"ID":"6dfca740-0387-428a-b957-3e8a09c6e352","Type":"ContainerStarted","Data":"3d84b64b15cc0bdfd81208f0d2d2402b1dd43fcf0c81056aa6b599a33f0ef14d"}
Feb 20 11:49:56.202139 master-0 kubenswrapper[7756]: I0220 11:49:56.202109 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:56.206782 master-0 kubenswrapper[7756]:
I0220 11:49:56.206322 7756 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-nr4tg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.27:8080/healthz\": dial tcp 10.128.0.27:8080: connect: connection refused" start-of-body=
Feb 20 11:49:56.206782 master-0 kubenswrapper[7756]: I0220 11:49:56.206371 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" podUID="6dfca740-0387-428a-b957-3e8a09c6e352" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.27:8080/healthz\": dial tcp 10.128.0.27:8080: connect: connection refused"
Feb 20 11:49:56.207600 master-0 kubenswrapper[7756]: I0220 11:49:56.207574 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" event={"ID":"22bba1b3-587d-4802-b4ae-946827c3fa7a","Type":"ContainerStarted","Data":"22ce417b8aa77dcb839f6586205111a1da24045caaa4e597db48aeba3f28532b"}
Feb 20 11:49:56.229469 master-0 kubenswrapper[7756]: I0220 11:49:56.229424 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"]
Feb 20 11:49:56.234760 master-0 kubenswrapper[7756]: W0220 11:49:56.234711 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e7df828_4166_4e0e_bb1e_042b3d14a6b6.slice/crio-c51fc9976235eba806445ba527326ae1186d7ebeaf647b93b2ea841938dc4883 WatchSource:0}: Error finding container c51fc9976235eba806445ba527326ae1186d7ebeaf647b93b2ea841938dc4883: Status 404 returned error can't find the container with id c51fc9976235eba806445ba527326ae1186d7ebeaf647b93b2ea841938dc4883
Feb 20 11:49:56.254120 master-0 kubenswrapper[7756]: I0220 11:49:56.254079 7756 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"]
Feb 20 11:49:56.254965 master-0 kubenswrapper[7756]: I0220 11:49:56.254946 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.258500 master-0 kubenswrapper[7756]: I0220 11:49:56.258474 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 20 11:49:56.258666 master-0 kubenswrapper[7756]: I0220 11:49:56.258649 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 20 11:49:56.261005 master-0 kubenswrapper[7756]: I0220 11:49:56.260702 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 20 11:49:56.269033 master-0 kubenswrapper[7756]: I0220 11:49:56.268982 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"]
Feb 20 11:49:56.272077 master-0 kubenswrapper[7756]: I0220 11:49:56.272036 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296116 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296182 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxs4n\" (UniqueName:
\"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-kube-api-access-kxs4n\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296208 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296263 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d9f9442b-25b9-420f-b748-bb13423809fe-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296285 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296346 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc9wx\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-kube-api-access-sc9wx\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID:
\"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296368 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296387 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296722 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.296750 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d9f9442b-25b9-420f-b748-bb13423809fe-cache\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.297251 master-0
kubenswrapper[7756]: I0220 11:49:56.296796 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.297251 master-0 kubenswrapper[7756]: I0220 11:49:56.297121 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.298784 master-0 kubenswrapper[7756]: I0220 11:49:56.297685 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.299875 master-0 kubenswrapper[7756]: I0220 11:49:56.299415 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.299875 master-0 kubenswrapper[7756]: E0220 11:49:56.299474 7756 projected.go:301] Couldn't get configMap payload openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap references
non-existent config key: ca-bundle.crt Feb 20 11:49:56.299875 master-0 kubenswrapper[7756]: E0220 11:49:56.299492 7756 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f: configmap references non-existent config key: ca-bundle.crt Feb 20 11:49:56.299875 master-0 kubenswrapper[7756]: E0220 11:49:56.299554 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-ca-certs podName:b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1 nodeName:}" failed. No retries permitted until 2026-02-20 11:49:56.799538137 +0000 UTC m=+42.541786145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-ca-certs") pod "operator-controller-controller-manager-9cc7d7bb-vs87f" (UID: "b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1") : configmap references non-existent config key: ca-bundle.crt Feb 20 11:49:56.314265 master-0 kubenswrapper[7756]: I0220 11:49:56.312744 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"] Feb 20 11:49:56.314265 master-0 kubenswrapper[7756]: I0220 11:49:56.312921 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" podUID="67f890c8-05a1-4797-8da8-6194aea0df9a" containerName="cluster-version-operator" containerID="cri-o://fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932" gracePeriod=130 Feb 20 11:49:56.327633 master-0 kubenswrapper[7756]: I0220 11:49:56.327393 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc9wx\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-kube-api-access-sc9wx\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: 
\"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.398649 master-0 kubenswrapper[7756]: I0220 11:49:56.397719 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d9f9442b-25b9-420f-b748-bb13423809fe-cache\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.398649 master-0 kubenswrapper[7756]: I0220 11:49:56.397765 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.398649 master-0 kubenswrapper[7756]: I0220 11:49:56.397811 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxs4n\" (UniqueName: \"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-kube-api-access-kxs4n\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.398649 master-0 kubenswrapper[7756]: I0220 11:49:56.397832 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.398649 master-0 kubenswrapper[7756]: I0220 11:49:56.397848 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d9f9442b-25b9-420f-b748-bb13423809fe-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.398649 master-0 kubenswrapper[7756]: I0220 11:49:56.397871 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.398649 master-0 kubenswrapper[7756]: I0220 11:49:56.397942 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.398649 master-0 kubenswrapper[7756]: I0220 11:49:56.398247 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d9f9442b-25b9-420f-b748-bb13423809fe-cache\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.399060 master-0 kubenswrapper[7756]: I0220 11:49:56.399008 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.400864 master-0 kubenswrapper[7756]: I0220 11:49:56.400848 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.411060 master-0 kubenswrapper[7756]: I0220 11:49:56.411014 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d9f9442b-25b9-420f-b748-bb13423809fe-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.417176 master-0 kubenswrapper[7756]: I0220 11:49:56.415675 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxs4n\" (UniqueName: \"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-kube-api-access-kxs4n\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.478155 master-0 kubenswrapper[7756]: I0220 11:49:56.478122 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:56.498445 master-0 kubenswrapper[7756]: I0220 11:49:56.498402 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-cvo-updatepayloads\") pod \"67f890c8-05a1-4797-8da8-6194aea0df9a\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") "
Feb 20 11:49:56.498809 master-0 kubenswrapper[7756]: I0220 11:49:56.498465 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/67f890c8-05a1-4797-8da8-6194aea0df9a-service-ca\") pod \"67f890c8-05a1-4797-8da8-6194aea0df9a\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") "
Feb 20 11:49:56.498809 master-0 kubenswrapper[7756]: I0220 11:49:56.498487 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-ssl-certs\") pod \"67f890c8-05a1-4797-8da8-6194aea0df9a\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") "
Feb 20 11:49:56.498809 master-0 kubenswrapper[7756]: I0220 11:49:56.498505 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f890c8-05a1-4797-8da8-6194aea0df9a-kube-api-access\") pod \"67f890c8-05a1-4797-8da8-6194aea0df9a\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") "
Feb 20 11:49:56.498809 master-0 kubenswrapper[7756]: I0220 11:49:56.498622 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") pod \"67f890c8-05a1-4797-8da8-6194aea0df9a\" (UID: \"67f890c8-05a1-4797-8da8-6194aea0df9a\") "
Feb 20 11:49:56.499604 master-0 kubenswrapper[7756]: I0220 
11:49:56.498914 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "67f890c8-05a1-4797-8da8-6194aea0df9a" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:49:56.499604 master-0 kubenswrapper[7756]: I0220 11:49:56.498978 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "67f890c8-05a1-4797-8da8-6194aea0df9a" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:49:56.499604 master-0 kubenswrapper[7756]: I0220 11:49:56.499177 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f890c8-05a1-4797-8da8-6194aea0df9a-service-ca" (OuterVolumeSpecName: "service-ca") pod "67f890c8-05a1-4797-8da8-6194aea0df9a" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:49:56.503990 master-0 kubenswrapper[7756]: I0220 11:49:56.503966 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f890c8-05a1-4797-8da8-6194aea0df9a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "67f890c8-05a1-4797-8da8-6194aea0df9a" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:49:56.505439 master-0 kubenswrapper[7756]: I0220 11:49:56.505411 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "67f890c8-05a1-4797-8da8-6194aea0df9a" (UID: "67f890c8-05a1-4797-8da8-6194aea0df9a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:49:56.600340 master-0 kubenswrapper[7756]: I0220 11:49:56.600302 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f890c8-05a1-4797-8da8-6194aea0df9a-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:56.600340 master-0 kubenswrapper[7756]: I0220 11:49:56.600334 7756 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:56.600340 master-0 kubenswrapper[7756]: I0220 11:49:56.600347 7756 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/67f890c8-05a1-4797-8da8-6194aea0df9a-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:56.600601 master-0 kubenswrapper[7756]: I0220 11:49:56.600356 7756 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/67f890c8-05a1-4797-8da8-6194aea0df9a-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:56.600601 master-0 kubenswrapper[7756]: I0220 11:49:56.600367 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f890c8-05a1-4797-8da8-6194aea0df9a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 11:49:56.617599 master-0 kubenswrapper[7756]: I0220 11:49:56.617557 
7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"]
Feb 20 11:49:56.617731 master-0 kubenswrapper[7756]: E0220 11:49:56.617716 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f890c8-05a1-4797-8da8-6194aea0df9a" containerName="cluster-version-operator"
Feb 20 11:49:56.617731 master-0 kubenswrapper[7756]: I0220 11:49:56.617731 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f890c8-05a1-4797-8da8-6194aea0df9a" containerName="cluster-version-operator"
Feb 20 11:49:56.617871 master-0 kubenswrapper[7756]: I0220 11:49:56.617812 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f890c8-05a1-4797-8da8-6194aea0df9a" containerName="cluster-version-operator"
Feb 20 11:49:56.618124 master-0 kubenswrapper[7756]: I0220 11:49:56.618093 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.623232 master-0 kubenswrapper[7756]: I0220 11:49:56.620574 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 11:49:56.629650 master-0 kubenswrapper[7756]: I0220 11:49:56.624685 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 11:49:56.629650 master-0 kubenswrapper[7756]: I0220 11:49:56.624708 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 11:49:56.629650 master-0 kubenswrapper[7756]: I0220 11:49:56.624937 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 11:49:56.629650 master-0 kubenswrapper[7756]: I0220 11:49:56.624976 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 11:49:56.631555 master-0 kubenswrapper[7756]: I0220 11:49:56.630245 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 11:49:56.631555 master-0 kubenswrapper[7756]: I0220 11:49:56.630355 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"]
Feb 20 11:49:56.640911 master-0 kubenswrapper[7756]: I0220 11:49:56.640722 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:49:56.702215 master-0 kubenswrapper[7756]: I0220 11:49:56.701993 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-proxy-ca-bundles\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.702215 master-0 kubenswrapper[7756]: I0220 11:49:56.702146 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f598b79b-809a-4b22-91f0-5227017f6bcb-serving-cert\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.702438 master-0 kubenswrapper[7756]: I0220 11:49:56.702304 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-config\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.702438 master-0 kubenswrapper[7756]: I0220 11:49:56.702329 
7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk4rp\" (UniqueName: \"kubernetes.io/projected/f598b79b-809a-4b22-91f0-5227017f6bcb-kube-api-access-jk4rp\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.702438 master-0 kubenswrapper[7756]: I0220 11:49:56.702382 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-client-ca\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.803833 master-0 kubenswrapper[7756]: I0220 11:49:56.803473 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f598b79b-809a-4b22-91f0-5227017f6bcb-serving-cert\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.803833 master-0 kubenswrapper[7756]: I0220 11:49:56.803546 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-config\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.806347 master-0 kubenswrapper[7756]: I0220 11:49:56.804831 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk4rp\" (UniqueName: \"kubernetes.io/projected/f598b79b-809a-4b22-91f0-5227017f6bcb-kube-api-access-jk4rp\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.806347 master-0 kubenswrapper[7756]: I0220 11:49:56.804881 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-client-ca\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.806347 master-0 kubenswrapper[7756]: I0220 11:49:56.804909 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.806347 master-0 kubenswrapper[7756]: I0220 11:49:56.804939 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-proxy-ca-bundles\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.806347 master-0 kubenswrapper[7756]: I0220 11:49:56.805997 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-proxy-ca-bundles\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.807505 master-0 kubenswrapper[7756]: I0220 11:49:56.806797 7756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-config\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.807505 master-0 kubenswrapper[7756]: I0220 11:49:56.807483 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-client-ca\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.819941 master-0 kubenswrapper[7756]: I0220 11:49:56.809652 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f598b79b-809a-4b22-91f0-5227017f6bcb-serving-cert\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.819941 master-0 kubenswrapper[7756]: I0220 11:49:56.810407 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.824105 master-0 kubenswrapper[7756]: I0220 11:49:56.824052 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:49:56.830761 master-0 kubenswrapper[7756]: I0220 11:49:56.830565 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk4rp\" (UniqueName: \"kubernetes.io/projected/f598b79b-809a-4b22-91f0-5227017f6bcb-kube-api-access-jk4rp\") pod \"controller-manager-5b79dbbc7-zvpx6\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") " pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:56.935913 master-0 kubenswrapper[7756]: I0220 11:49:56.935460 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:49:57.044202 master-0 kubenswrapper[7756]: I0220 11:49:57.044059 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"]
Feb 20 11:49:57.056843 master-0 kubenswrapper[7756]: W0220 11:49:57.056671 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9f9442b_25b9_420f_b748_bb13423809fe.slice/crio-3a76972be7f15da250f8e27177b299ce05a6278ca9f8bfe782f7866364a2323b WatchSource:0}: Error finding container 3a76972be7f15da250f8e27177b299ce05a6278ca9f8bfe782f7866364a2323b: Status 404 returned error can't find the container with id 3a76972be7f15da250f8e27177b299ce05a6278ca9f8bfe782f7866364a2323b
Feb 20 11:49:57.207800 master-0 kubenswrapper[7756]: I0220 11:49:57.207765 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"]
Feb 20 11:49:57.217713 master-0 kubenswrapper[7756]: I0220 11:49:57.216749 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p" 
event={"ID":"0e7df828-4166-4e0e-bb1e-042b3d14a6b6","Type":"ContainerStarted","Data":"c51fc9976235eba806445ba527326ae1186d7ebeaf647b93b2ea841938dc4883"}
Feb 20 11:49:57.218754 master-0 kubenswrapper[7756]: I0220 11:49:57.218700 7756 generic.go:334] "Generic (PLEG): container finished" podID="67f890c8-05a1-4797-8da8-6194aea0df9a" containerID="fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932" exitCode=0
Feb 20 11:49:57.218840 master-0 kubenswrapper[7756]: I0220 11:49:57.218821 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"
Feb 20 11:49:57.219243 master-0 kubenswrapper[7756]: I0220 11:49:57.219222 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" event={"ID":"67f890c8-05a1-4797-8da8-6194aea0df9a","Type":"ContainerDied","Data":"fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932"}
Feb 20 11:49:57.219289 master-0 kubenswrapper[7756]: I0220 11:49:57.219252 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw" event={"ID":"67f890c8-05a1-4797-8da8-6194aea0df9a","Type":"ContainerDied","Data":"7529d79a6118a295907b42ef070c06d01dccfc108d0cf68bd4817c376797c420"}
Feb 20 11:49:57.219289 master-0 kubenswrapper[7756]: I0220 11:49:57.219269 7756 scope.go:117] "RemoveContainer" containerID="fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932"
Feb 20 11:49:57.224504 master-0 kubenswrapper[7756]: I0220 11:49:57.224467 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" event={"ID":"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783","Type":"ContainerStarted","Data":"6cceab7cff3eceea2a18c3f9dabbbeccd1e0ebcb1b3ce52fedc88dcebb268425"}
Feb 20 11:49:57.224593 master-0 kubenswrapper[7756]: I0220 11:49:57.224516 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" event={"ID":"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783","Type":"ContainerStarted","Data":"6e0d344ebc9083ae093b3615560303e004f95402f791a1230a823e11b3266557"}
Feb 20 11:49:57.225248 master-0 kubenswrapper[7756]: W0220 11:49:57.225224 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2c2ee35_8ef2_4a79_a5c5_95cdd12653e1.slice/crio-aaa9389e6efd83bcb84425795f77ecd0592b13d2955b3048aeff511ecb88fc48 WatchSource:0}: Error finding container aaa9389e6efd83bcb84425795f77ecd0592b13d2955b3048aeff511ecb88fc48: Status 404 returned error can't find the container with id aaa9389e6efd83bcb84425795f77ecd0592b13d2955b3048aeff511ecb88fc48
Feb 20 11:49:57.226842 master-0 kubenswrapper[7756]: I0220 11:49:57.226810 7756 generic.go:334] "Generic (PLEG): container finished" podID="59c1cc61-8692-4a35-83fc-6bbef7086117" containerID="80ebc6a1a97e735d70ead1262f4a97848d649fb396b046241e543eb44b09793c" exitCode=0
Feb 20 11:49:57.227123 master-0 kubenswrapper[7756]: I0220 11:49:57.227046 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" event={"ID":"59c1cc61-8692-4a35-83fc-6bbef7086117","Type":"ContainerDied","Data":"80ebc6a1a97e735d70ead1262f4a97848d649fb396b046241e543eb44b09793c"}
Feb 20 11:49:57.228660 master-0 kubenswrapper[7756]: I0220 11:49:57.228632 7756 generic.go:334] "Generic (PLEG): container finished" podID="1c183ec2-be40-4781-aabd-928c4f70661e" containerID="6c791d4c750ee2b19243dbaaa5b56296c4a7a58c936297cef8fe6bddbd6e486a" exitCode=0
Feb 20 11:49:57.228835 master-0 kubenswrapper[7756]: I0220 11:49:57.228809 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" 
event={"ID":"1c183ec2-be40-4781-aabd-928c4f70661e","Type":"ContainerDied","Data":"6c791d4c750ee2b19243dbaaa5b56296c4a7a58c936297cef8fe6bddbd6e486a"}
Feb 20 11:49:57.229997 master-0 kubenswrapper[7756]: I0220 11:49:57.229974 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" event={"ID":"d9f9442b-25b9-420f-b748-bb13423809fe","Type":"ContainerStarted","Data":"3a76972be7f15da250f8e27177b299ce05a6278ca9f8bfe782f7866364a2323b"}
Feb 20 11:49:57.232220 master-0 kubenswrapper[7756]: I0220 11:49:57.232185 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-29622" event={"ID":"1709ef31-9ddd-42bf-9a95-4be4502a0828","Type":"ContainerStarted","Data":"e56ae7466d7bc8510fc77432e12dbbf973ffc24e0ad5222f4fe4deca1b944036"}
Feb 20 11:49:57.232289 master-0 kubenswrapper[7756]: I0220 11:49:57.232225 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-29622" event={"ID":"1709ef31-9ddd-42bf-9a95-4be4502a0828","Type":"ContainerStarted","Data":"b4bbcb8917d329a2b935635fe0693a9c07e63b65c8049f7a6144428567fbed5d"}
Feb 20 11:49:57.235113 master-0 kubenswrapper[7756]: I0220 11:49:57.235089 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:49:57.305639 master-0 kubenswrapper[7756]: I0220 11:49:57.304987 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"]
Feb 20 11:49:57.306732 master-0 kubenswrapper[7756]: I0220 11:49:57.306702 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-4pnsw"]
Feb 20 11:49:57.347552 master-0 kubenswrapper[7756]: I0220 11:49:57.346562 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-57476485-dwvgg"]
Feb 20 11:49:57.347552 master-0 kubenswrapper[7756]: I0220 11:49:57.347176 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg"
Feb 20 11:49:57.351350 master-0 kubenswrapper[7756]: I0220 11:49:57.351291 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 20 11:49:57.351846 master-0 kubenswrapper[7756]: I0220 11:49:57.351794 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 20 11:49:57.352075 master-0 kubenswrapper[7756]: I0220 11:49:57.352049 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 20 11:49:57.366435 master-0 kubenswrapper[7756]: I0220 11:49:57.366382 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"]
Feb 20 11:49:57.411727 master-0 kubenswrapper[7756]: I0220 11:49:57.411673 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg"
Feb 20 11:49:57.411727 master-0 kubenswrapper[7756]: I0220 11:49:57.411729 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-ssl-certs\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg"
Feb 20 11:49:57.411962 master-0 
kubenswrapper[7756]: I0220 11:49:57.411762 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89383482-190e-4f74-a81e-b1547e5b9ae6-service-ca\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.411962 master-0 kubenswrapper[7756]: I0220 11:49:57.411801 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89383482-190e-4f74-a81e-b1547e5b9ae6-serving-cert\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.411962 master-0 kubenswrapper[7756]: I0220 11:49:57.411818 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89383482-190e-4f74-a81e-b1547e5b9ae6-kube-api-access\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.513971 master-0 kubenswrapper[7756]: I0220 11:49:57.513862 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89383482-190e-4f74-a81e-b1547e5b9ae6-serving-cert\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.513971 master-0 kubenswrapper[7756]: I0220 11:49:57.513951 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/89383482-190e-4f74-a81e-b1547e5b9ae6-kube-api-access\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.514242 master-0 kubenswrapper[7756]: I0220 11:49:57.514008 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.514242 master-0 kubenswrapper[7756]: I0220 11:49:57.514054 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-ssl-certs\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.514242 master-0 kubenswrapper[7756]: I0220 11:49:57.514103 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89383482-190e-4f74-a81e-b1547e5b9ae6-service-ca\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.515453 master-0 kubenswrapper[7756]: I0220 11:49:57.515408 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89383482-190e-4f74-a81e-b1547e5b9ae6-service-ca\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" 
Feb 20 11:49:57.516247 master-0 kubenswrapper[7756]: I0220 11:49:57.516184 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.516334 master-0 kubenswrapper[7756]: I0220 11:49:57.516302 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-ssl-certs\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.521049 master-0 kubenswrapper[7756]: I0220 11:49:57.520600 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89383482-190e-4f74-a81e-b1547e5b9ae6-serving-cert\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.545330 master-0 kubenswrapper[7756]: I0220 11:49:57.545250 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89383482-190e-4f74-a81e-b1547e5b9ae6-kube-api-access\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:57.693639 master-0 kubenswrapper[7756]: I0220 11:49:57.688182 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 11:49:58.249029 master-0 kubenswrapper[7756]: I0220 11:49:58.248954 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" event={"ID":"d9f9442b-25b9-420f-b748-bb13423809fe","Type":"ContainerStarted","Data":"e40652787e1056d5bea2a708b11a28c902ccf168191b9c27b9975e79166ffc60"} Feb 20 11:49:58.251074 master-0 kubenswrapper[7756]: I0220 11:49:58.251046 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" event={"ID":"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1","Type":"ContainerStarted","Data":"aaa9389e6efd83bcb84425795f77ecd0592b13d2955b3048aeff511ecb88fc48"} Feb 20 11:49:58.588200 master-0 kubenswrapper[7756]: I0220 11:49:58.588102 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f890c8-05a1-4797-8da8-6194aea0df9a" path="/var/lib/kubelet/pods/67f890c8-05a1-4797-8da8-6194aea0df9a/volumes" Feb 20 11:49:59.974511 master-0 kubenswrapper[7756]: W0220 11:49:59.974398 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf598b79b_809a_4b22_91f0_5227017f6bcb.slice/crio-79ecee64aaf5eb07e6327598335a8418cc9eadd15d6c29e3a6287aa09900ff15 WatchSource:0}: Error finding container 79ecee64aaf5eb07e6327598335a8418cc9eadd15d6c29e3a6287aa09900ff15: Status 404 returned error can't find the container with id 79ecee64aaf5eb07e6327598335a8418cc9eadd15d6c29e3a6287aa09900ff15 Feb 20 11:50:00.028093 master-0 kubenswrapper[7756]: I0220 11:50:00.028040 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.051605 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-audit-policies\") pod \"1c183ec2-be40-4781-aabd-928c4f70661e\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.051662 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-serving-ca\") pod \"1c183ec2-be40-4781-aabd-928c4f70661e\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.051712 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-serving-cert\") pod \"1c183ec2-be40-4781-aabd-928c4f70661e\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.051772 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-client\") pod \"1c183ec2-be40-4781-aabd-928c4f70661e\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.051808 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-trusted-ca-bundle\") pod \"1c183ec2-be40-4781-aabd-928c4f70661e\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.051833 7756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-encryption-config\") pod \"1c183ec2-be40-4781-aabd-928c4f70661e\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.051895 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n4rw\" (UniqueName: \"kubernetes.io/projected/1c183ec2-be40-4781-aabd-928c4f70661e-kube-api-access-2n4rw\") pod \"1c183ec2-be40-4781-aabd-928c4f70661e\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.051919 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c183ec2-be40-4781-aabd-928c4f70661e-audit-dir\") pod \"1c183ec2-be40-4781-aabd-928c4f70661e\" (UID: \"1c183ec2-be40-4781-aabd-928c4f70661e\") " Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.052153 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c183ec2-be40-4781-aabd-928c4f70661e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1c183ec2-be40-4781-aabd-928c4f70661e" (UID: "1c183ec2-be40-4781-aabd-928c4f70661e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.052680 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "1c183ec2-be40-4781-aabd-928c4f70661e" (UID: "1c183ec2-be40-4781-aabd-928c4f70661e"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.053167 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1c183ec2-be40-4781-aabd-928c4f70661e" (UID: "1c183ec2-be40-4781-aabd-928c4f70661e"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:50:00.055032 master-0 kubenswrapper[7756]: I0220 11:50:00.053618 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1c183ec2-be40-4781-aabd-928c4f70661e" (UID: "1c183ec2-be40-4781-aabd-928c4f70661e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:50:00.056419 master-0 kubenswrapper[7756]: I0220 11:50:00.055383 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1c183ec2-be40-4781-aabd-928c4f70661e" (UID: "1c183ec2-be40-4781-aabd-928c4f70661e"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 11:50:00.057385 master-0 kubenswrapper[7756]: I0220 11:50:00.057317 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c183ec2-be40-4781-aabd-928c4f70661e-kube-api-access-2n4rw" (OuterVolumeSpecName: "kube-api-access-2n4rw") pod "1c183ec2-be40-4781-aabd-928c4f70661e" (UID: "1c183ec2-be40-4781-aabd-928c4f70661e"). InnerVolumeSpecName "kube-api-access-2n4rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 11:50:00.057947 master-0 kubenswrapper[7756]: I0220 11:50:00.057859 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1c183ec2-be40-4781-aabd-928c4f70661e" (UID: "1c183ec2-be40-4781-aabd-928c4f70661e"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 11:50:00.058319 master-0 kubenswrapper[7756]: I0220 11:50:00.058257 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1c183ec2-be40-4781-aabd-928c4f70661e" (UID: "1c183ec2-be40-4781-aabd-928c4f70661e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 11:50:00.153067 master-0 kubenswrapper[7756]: I0220 11:50:00.152983 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 11:50:00.153067 master-0 kubenswrapper[7756]: I0220 11:50:00.153032 7756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 20 11:50:00.153067 master-0 kubenswrapper[7756]: I0220 11:50:00.153049 7756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 11:50:00.153067 master-0 kubenswrapper[7756]: I0220 11:50:00.153067 7756 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1c183ec2-be40-4781-aabd-928c4f70661e-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 20 11:50:00.153480 master-0 kubenswrapper[7756]: I0220 11:50:00.153105 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n4rw\" (UniqueName: \"kubernetes.io/projected/1c183ec2-be40-4781-aabd-928c4f70661e-kube-api-access-2n4rw\") on node \"master-0\" DevicePath \"\"" Feb 20 11:50:00.153480 master-0 kubenswrapper[7756]: I0220 11:50:00.153120 7756 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c183ec2-be40-4781-aabd-928c4f70661e-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 11:50:00.153480 master-0 kubenswrapper[7756]: I0220 11:50:00.153135 7756 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 20 11:50:00.153480 master-0 kubenswrapper[7756]: I0220 11:50:00.153153 7756 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c183ec2-be40-4781-aabd-928c4f70661e-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 11:50:00.264335 master-0 kubenswrapper[7756]: I0220 11:50:00.264257 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6" event={"ID":"f598b79b-809a-4b22-91f0-5227017f6bcb","Type":"ContainerStarted","Data":"79ecee64aaf5eb07e6327598335a8418cc9eadd15d6c29e3a6287aa09900ff15"} Feb 20 11:50:00.266612 master-0 kubenswrapper[7756]: I0220 11:50:00.266569 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" event={"ID":"1c183ec2-be40-4781-aabd-928c4f70661e","Type":"ContainerDied","Data":"b21326213655ed5cbd77c1642d0a20ed5adab633f8bf2bcc2848b92a7011ed7f"} Feb 20 11:50:00.266736 master-0 
kubenswrapper[7756]: I0220 11:50:00.266701 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx" Feb 20 11:50:00.358738 master-0 kubenswrapper[7756]: I0220 11:50:00.358684 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx"] Feb 20 11:50:00.367068 master-0 kubenswrapper[7756]: I0220 11:50:00.367014 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-58558b4f4c-b5nxx"] Feb 20 11:50:00.437899 master-0 kubenswrapper[7756]: I0220 11:50:00.437855 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kx4ch" Feb 20 11:50:00.587642 master-0 kubenswrapper[7756]: I0220 11:50:00.587509 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c183ec2-be40-4781-aabd-928c4f70661e" path="/var/lib/kubelet/pods/1c183ec2-be40-4781-aabd-928c4f70661e/volumes" Feb 20 11:50:00.697088 master-0 kubenswrapper[7756]: I0220 11:50:00.697043 7756 scope.go:117] "RemoveContainer" containerID="fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932" Feb 20 11:50:00.697617 master-0 kubenswrapper[7756]: E0220 11:50:00.697583 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932\": container with ID starting with fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932 not found: ID does not exist" containerID="fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932" Feb 20 11:50:00.697678 master-0 kubenswrapper[7756]: I0220 11:50:00.697617 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932"} err="failed to get container status 
\"fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932\": rpc error: code = NotFound desc = could not find container \"fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932\": container with ID starting with fa4ba9b481647c70fe45ee5f4e5d91d6aa14b2c851cafd176c7db271a0f62932 not found: ID does not exist" Feb 20 11:50:00.697678 master-0 kubenswrapper[7756]: I0220 11:50:00.697657 7756 scope.go:117] "RemoveContainer" containerID="6c791d4c750ee2b19243dbaaa5b56296c4a7a58c936297cef8fe6bddbd6e486a" Feb 20 11:50:00.782206 master-0 kubenswrapper[7756]: W0220 11:50:00.782064 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89383482_190e_4f74_a81e_b1547e5b9ae6.slice/crio-ac8b90837a8f5e731e7b22ff050f1b380571286ef85231576020efe34cd2e430 WatchSource:0}: Error finding container ac8b90837a8f5e731e7b22ff050f1b380571286ef85231576020efe34cd2e430: Status 404 returned error can't find the container with id ac8b90837a8f5e731e7b22ff050f1b380571286ef85231576020efe34cd2e430 Feb 20 11:50:01.273164 master-0 kubenswrapper[7756]: I0220 11:50:01.273114 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" event={"ID":"89383482-190e-4f74-a81e-b1547e5b9ae6","Type":"ContainerStarted","Data":"ac8b90837a8f5e731e7b22ff050f1b380571286ef85231576020efe34cd2e430"} Feb 20 11:50:01.460254 master-0 kubenswrapper[7756]: I0220 11:50:01.460198 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 20 11:50:01.460411 master-0 kubenswrapper[7756]: E0220 11:50:01.460399 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c183ec2-be40-4781-aabd-928c4f70661e" containerName="fix-audit-permissions" Feb 20 11:50:01.460462 master-0 kubenswrapper[7756]: I0220 11:50:01.460414 7756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c183ec2-be40-4781-aabd-928c4f70661e" containerName="fix-audit-permissions" Feb 20 11:50:01.460559 master-0 kubenswrapper[7756]: I0220 11:50:01.460537 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c183ec2-be40-4781-aabd-928c4f70661e" containerName="fix-audit-permissions" Feb 20 11:50:01.461382 master-0 kubenswrapper[7756]: I0220 11:50:01.461343 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 20 11:50:01.463982 master-0 kubenswrapper[7756]: I0220 11:50:01.463332 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 20 11:50:01.469372 master-0 kubenswrapper[7756]: I0220 11:50:01.469316 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 20 11:50:01.469969 master-0 kubenswrapper[7756]: I0220 11:50:01.469950 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 11:50:01.474901 master-0 kubenswrapper[7756]: I0220 11:50:01.474482 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 20 11:50:01.482001 master-0 kubenswrapper[7756]: I0220 11:50:01.481733 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 20 11:50:01.503515 master-0 kubenswrapper[7756]: I0220 11:50:01.503453 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 20 11:50:01.569959 master-0 kubenswrapper[7756]: I0220 11:50:01.569924 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-var-lock\") pod \"installer-1-master-0\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 11:50:01.570086 master-0 kubenswrapper[7756]: I0220 11:50:01.569973 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a827d746-cfd3-48a2-a20b-2ff1526986b9-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 20 11:50:01.570086 master-0 kubenswrapper[7756]: I0220 11:50:01.570006 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kube-api-access\") pod \"installer-1-master-0\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 11:50:01.570086 master-0 kubenswrapper[7756]: I0220 11:50:01.570057 7756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 11:50:01.570228 master-0 kubenswrapper[7756]: I0220 11:50:01.570091 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 20 11:50:01.570228 master-0 kubenswrapper[7756]: I0220 11:50:01.570110 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-var-lock\") pod \"installer-1-master-0\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 20 11:50:01.626673 master-0 kubenswrapper[7756]: I0220 11:50:01.626612 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"] Feb 20 11:50:01.627233 master-0 kubenswrapper[7756]: I0220 11:50:01.627197 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 11:50:01.631271 master-0 kubenswrapper[7756]: I0220 11:50:01.631167 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 20 11:50:01.631446 master-0 kubenswrapper[7756]: I0220 11:50:01.631382 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 20 11:50:01.631753 master-0 kubenswrapper[7756]: I0220 11:50:01.631535 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 20 11:50:01.632005 master-0 kubenswrapper[7756]: I0220 11:50:01.631985 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 20 11:50:01.632222 master-0 kubenswrapper[7756]: I0220 11:50:01.632203 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 20 11:50:01.633010 master-0 kubenswrapper[7756]: I0220 11:50:01.632962 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 20 11:50:01.633491 master-0 kubenswrapper[7756]: I0220 11:50:01.633112 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 20 11:50:01.633491 master-0 kubenswrapper[7756]: I0220 11:50:01.633299 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 20 11:50:01.647858 master-0 kubenswrapper[7756]: I0220 11:50:01.643323 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"] Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.670629 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.670679 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-client\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.670738 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.670797 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-dir\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.670830 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krn8\" (UniqueName: \"kubernetes.io/projected/fca213c3-42ca-4341-a2e6-a143b9389f9e-kube-api-access-7krn8\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.670862 7756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-serving-cert\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.670933 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671003 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-var-lock\") pod \"installer-1-master-0\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671027 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-var-lock\") pod \"installer-1-master-0\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671072 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671078 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-var-lock\") pod \"installer-1-master-0\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671186 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-var-lock\") pod \"installer-1-master-0\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671201 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-encryption-config\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671228 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a827d746-cfd3-48a2-a20b-2ff1526986b9-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671244 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-serving-ca\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671267 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-policies\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671298 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-trusted-ca-bundle\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.672167 master-0 kubenswrapper[7756]: I0220 11:50:01.671319 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kube-api-access\") pod \"installer-1-master-0\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 20 11:50:01.688388 master-0 kubenswrapper[7756]: I0220 11:50:01.687714 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a827d746-cfd3-48a2-a20b-2ff1526986b9-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 11:50:01.693785 master-0 kubenswrapper[7756]: I0220 11:50:01.691590 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kube-api-access\") pod \"installer-1-master-0\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 20 11:50:01.772076 master-0 kubenswrapper[7756]: I0220 11:50:01.772024 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-client\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.772076 master-0 kubenswrapper[7756]: I0220 11:50:01.772077 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-dir\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.772328 master-0 kubenswrapper[7756]: I0220 11:50:01.772098 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7krn8\" (UniqueName: \"kubernetes.io/projected/fca213c3-42ca-4341-a2e6-a143b9389f9e-kube-api-access-7krn8\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.772328 master-0 kubenswrapper[7756]: I0220 11:50:01.772122 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-serving-cert\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.772328 master-0 kubenswrapper[7756]: I0220 11:50:01.772150 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-encryption-config\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.772328 master-0 kubenswrapper[7756]: I0220 11:50:01.772168 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-serving-ca\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.772328 master-0 kubenswrapper[7756]: I0220 11:50:01.772185 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-policies\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.772328 master-0 kubenswrapper[7756]: I0220 11:50:01.772214 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-trusted-ca-bundle\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.772780 master-0 kubenswrapper[7756]: I0220 11:50:01.772733 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-trusted-ca-bundle\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.774614 master-0 kubenswrapper[7756]: I0220 11:50:01.774575 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-serving-ca\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.775334 master-0 kubenswrapper[7756]: I0220 11:50:01.775301 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-dir\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.775858 master-0 kubenswrapper[7756]: I0220 11:50:01.775817 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-policies\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.778978 master-0 kubenswrapper[7756]: I0220 11:50:01.778844 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-serving-cert\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.779172 master-0 kubenswrapper[7756]: I0220 11:50:01.779136 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-client\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.779225 master-0 kubenswrapper[7756]: I0220 11:50:01.779173 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-encryption-config\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.802373 master-0 kubenswrapper[7756]: I0220 11:50:01.796498 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7krn8\" (UniqueName: \"kubernetes.io/projected/fca213c3-42ca-4341-a2e6-a143b9389f9e-kube-api-access-7krn8\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:01.884313 master-0 kubenswrapper[7756]: I0220 11:50:01.884245 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 11:50:01.906049 master-0 kubenswrapper[7756]: I0220 11:50:01.905794 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 20 11:50:01.952105 master-0 kubenswrapper[7756]: I0220 11:50:01.945384 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 11:50:02.293850 master-0 kubenswrapper[7756]: I0220 11:50:02.293661 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" event={"ID":"89383482-190e-4f74-a81e-b1547e5b9ae6","Type":"ContainerStarted","Data":"cdc9cc9ed8b0ca2df37b48bd33917f4f6c78f23c4f8aeddaab64905dab048bcd"}
Feb 20 11:50:02.295848 master-0 kubenswrapper[7756]: I0220 11:50:02.295759 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" event={"ID":"4d060bff-3c25-4eeb-bdd3-e20fb2687645","Type":"ContainerStarted","Data":"bde5416bac73fa471e56c249e60a20bae1969e15104045d6a10ee4eb30b42752"}
Feb 20 11:50:02.296037 master-0 kubenswrapper[7756]: I0220 11:50:02.295976 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:50:02.301829 master-0 kubenswrapper[7756]: I0220 11:50:02.301539 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" event={"ID":"59c1cc61-8692-4a35-83fc-6bbef7086117","Type":"ContainerStarted","Data":"627c3eea69726bde50a5db114890daaad85e78fee4ced6ea37119d2dab844226"}
Feb 20 11:50:02.301829 master-0 kubenswrapper[7756]: I0220 11:50:02.301607 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" event={"ID":"59c1cc61-8692-4a35-83fc-6bbef7086117","Type":"ContainerStarted","Data":"f58eac46d38b82856f3adefd4e394d96fdc7604b05c4af352b75ae45f27ac813"}
Feb 20 11:50:02.305826 master-0 kubenswrapper[7756]: I0220 11:50:02.305089 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" event={"ID":"d9f9442b-25b9-420f-b748-bb13423809fe","Type":"ContainerStarted","Data":"fd156bc7a5466d6b67b1239ac8613c9df410e89cc9c884ed83f3394a7c8ae304"}
Feb 20 11:50:02.305826 master-0 kubenswrapper[7756]: I0220 11:50:02.305746 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:50:02.313230 master-0 kubenswrapper[7756]: I0220 11:50:02.313047 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" event={"ID":"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1","Type":"ContainerStarted","Data":"2821f8c91f65d935805552e9ae26d43432f9d2a9d35c919ce7211efee70f0183"}
Feb 20 11:50:02.313230 master-0 kubenswrapper[7756]: I0220 11:50:02.313101 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" event={"ID":"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1","Type":"ContainerStarted","Data":"5bf57c12fc70c17e6a09a820bf2ab5c2dd4edbb89e20cced0e4474b7e6ce7231"}
Feb 20 11:50:02.313406 master-0 kubenswrapper[7756]: I0220 11:50:02.313383 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:50:02.325234 master-0 kubenswrapper[7756]: I0220 11:50:02.325199 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 11:50:02.326561 master-0 kubenswrapper[7756]: I0220 11:50:02.326486 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" podStartSLOduration=5.326469118 podStartE2EDuration="5.326469118s" podCreationTimestamp="2026-02-20 11:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:02.325538422 +0000 UTC m=+48.067786430" watchObservedRunningTime="2026-02-20 11:50:02.326469118 +0000 UTC m=+48.068717126"
Feb 20 11:50:02.328775 master-0 kubenswrapper[7756]: I0220 11:50:02.328693 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p" event={"ID":"0e7df828-4166-4e0e-bb1e-042b3d14a6b6","Type":"ContainerStarted","Data":"50b3d6e5e7ab30a6cfdc8ce1d5892866b82c6696745641cae28d3888bec7ebbc"}
Feb 20 11:50:02.329736 master-0 kubenswrapper[7756]: I0220 11:50:02.329194 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:50:02.334542 master-0 kubenswrapper[7756]: I0220 11:50:02.332436 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" event={"ID":"d65a0af4-c96f-44f8-9384-6bae4585983b","Type":"ContainerStarted","Data":"96ebc202ceb204e388187202f47dab6fe24fa580099283a0d46582775eded058"}
Feb 20 11:50:02.334542 master-0 kubenswrapper[7756]: I0220 11:50:02.332900 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:50:02.336497 master-0 kubenswrapper[7756]: I0220 11:50:02.336464 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:50:02.338601 master-0 kubenswrapper[7756]: I0220 11:50:02.337860 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" event={"ID":"dbce6cdc-040a-48e1-8a81-b6ff9c180eba","Type":"ContainerStarted","Data":"d69dad82c79e06506f238a23ca41e2826075f52d69f22b3756440b59139033ec"}
Feb 20 11:50:02.338601 master-0 kubenswrapper[7756]: I0220 11:50:02.338320 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:50:02.339084 master-0 kubenswrapper[7756]: I0220 11:50:02.339037 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 11:50:02.391081 master-0 kubenswrapper[7756]: I0220 11:50:02.387813 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" podStartSLOduration=10.722043888 podStartE2EDuration="18.387790441s" podCreationTimestamp="2026-02-20 11:49:44 +0000 UTC" firstStartedPulling="2026-02-20 11:49:48.143808139 +0000 UTC m=+33.886056147" lastFinishedPulling="2026-02-20 11:49:55.809554692 +0000 UTC m=+41.551802700" observedRunningTime="2026-02-20 11:50:02.387240315 +0000 UTC m=+48.129488333" watchObservedRunningTime="2026-02-20 11:50:02.387790441 +0000 UTC m=+48.130038449"
Feb 20 11:50:02.403493 master-0 kubenswrapper[7756]: W0220 11:50:02.403422 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda827d746_cfd3_48a2_a20b_2ff1526986b9.slice/crio-c266b103482362f3c418b4517deddb3769575b5e6f6333189c11e3e4fa22e93f WatchSource:0}: Error finding container c266b103482362f3c418b4517deddb3769575b5e6f6333189c11e3e4fa22e93f: Status 404 returned error can't find the container with id c266b103482362f3c418b4517deddb3769575b5e6f6333189c11e3e4fa22e93f
Feb 20 11:50:02.410801 master-0 kubenswrapper[7756]: I0220 11:50:02.410542 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 20 11:50:02.420379 master-0 kubenswrapper[7756]: I0220 11:50:02.420299 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 20 11:50:02.425634 master-0 kubenswrapper[7756]: I0220 11:50:02.421058 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.428030 master-0 kubenswrapper[7756]: I0220 11:50:02.426469 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Feb 20 11:50:02.428175 master-0 kubenswrapper[7756]: I0220 11:50:02.428110 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" podStartSLOduration=6.428085063 podStartE2EDuration="6.428085063s" podCreationTimestamp="2026-02-20 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:02.423814351 +0000 UTC m=+48.166062379" watchObservedRunningTime="2026-02-20 11:50:02.428085063 +0000 UTC m=+48.170333071"
Feb 20 11:50:02.428336 master-0 kubenswrapper[7756]: I0220 11:50:02.428313 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 20 11:50:02.437962 master-0 kubenswrapper[7756]: I0220 11:50:02.437767 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 20 11:50:02.450368 master-0 kubenswrapper[7756]: I0220 11:50:02.450219 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" podStartSLOduration=6.450193035 podStartE2EDuration="6.450193035s" podCreationTimestamp="2026-02-20 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:02.45003092 +0000 UTC m=+48.192278928" watchObservedRunningTime="2026-02-20 11:50:02.450193035 +0000 UTC m=+48.192441043"
Feb 20 11:50:02.487854 master-0 kubenswrapper[7756]: I0220 11:50:02.482083 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:50:02.487854 master-0 kubenswrapper[7756]: I0220 11:50:02.482505 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 11:50:02.490979 master-0 kubenswrapper[7756]: I0220 11:50:02.490930 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-var-lock\") pod \"installer-1-master-0\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.491050 master-0 kubenswrapper[7756]: I0220 11:50:02.491000 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5710eb66-9717-4beb-a8b2-19f6886376b3-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.491142 master-0 kubenswrapper[7756]: I0220 11:50:02.491118 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.491911 master-0 kubenswrapper[7756]: I0220 11:50:02.491865 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"]
Feb 20 11:50:02.536492 master-0 kubenswrapper[7756]: I0220 11:50:02.528723 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p" podStartSLOduration=4.984301231 podStartE2EDuration="9.528701419s" podCreationTimestamp="2026-02-20 11:49:53 +0000 UTC" firstStartedPulling="2026-02-20 11:49:56.237758681 +0000 UTC m=+41.980006689" lastFinishedPulling="2026-02-20 11:50:00.782158869 +0000 UTC m=+46.524406877" observedRunningTime="2026-02-20 11:50:02.528134842 +0000 UTC m=+48.270382870" watchObservedRunningTime="2026-02-20 11:50:02.528701419 +0000 UTC m=+48.270949427"
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: I0220 11:50:02.541967 7756 patch_prober.go:28] interesting pod/apiserver-7666bb78cc-jxswr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]log ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]etcd ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/max-in-flight-filter ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/openshift.io-startinformers ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: livez check failed
Feb 20 11:50:02.543085 master-0 kubenswrapper[7756]: I0220 11:50:02.542019 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" podUID="59c1cc61-8692-4a35-83fc-6bbef7086117" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:50:02.592659 master-0 kubenswrapper[7756]: I0220 11:50:02.592616 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-var-lock\") pod \"installer-1-master-0\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.592659 master-0 kubenswrapper[7756]: I0220 11:50:02.592657 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5710eb66-9717-4beb-a8b2-19f6886376b3-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.592843 master-0 kubenswrapper[7756]: I0220 11:50:02.592699 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.593011 master-0 kubenswrapper[7756]: I0220 11:50:02.592978 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-var-lock\") pod \"installer-1-master-0\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.593112 master-0 kubenswrapper[7756]: I0220 11:50:02.593096 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.609723 master-0 kubenswrapper[7756]: I0220 11:50:02.609671 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5710eb66-9717-4beb-a8b2-19f6886376b3-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.727935 master-0 kubenswrapper[7756]: I0220 11:50:02.727868 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mf5rz"]
Feb 20 11:50:02.728723 master-0 kubenswrapper[7756]: I0220 11:50:02.728702 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.747455 master-0 kubenswrapper[7756]: I0220 11:50:02.746649 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mf5rz"]
Feb 20 11:50:02.787963 master-0 kubenswrapper[7756]: I0220 11:50:02.787831 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:02.794584 master-0 kubenswrapper[7756]: I0220 11:50:02.794435 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22pz\" (UniqueName: \"kubernetes.io/projected/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-kube-api-access-z22pz\") pod \"certified-operators-mf5rz\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") " pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.794584 master-0 kubenswrapper[7756]: I0220 11:50:02.794476 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-utilities\") pod \"certified-operators-mf5rz\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") " pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.794584 master-0 kubenswrapper[7756]: I0220 11:50:02.794501 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-catalog-content\") pod \"certified-operators-mf5rz\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") " pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.895819 master-0 kubenswrapper[7756]: I0220 11:50:02.895515 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z22pz\" (UniqueName: \"kubernetes.io/projected/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-kube-api-access-z22pz\") pod \"certified-operators-mf5rz\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") " pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.895819 master-0 kubenswrapper[7756]: I0220 11:50:02.895580 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-utilities\") pod \"certified-operators-mf5rz\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") " pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.895819 master-0 kubenswrapper[7756]: I0220 11:50:02.895604 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-catalog-content\") pod \"certified-operators-mf5rz\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") " pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.896217 master-0 kubenswrapper[7756]: I0220 11:50:02.896025 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-catalog-content\") pod \"certified-operators-mf5rz\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") " pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.898153 master-0 kubenswrapper[7756]: I0220 11:50:02.896613 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-utilities\") pod \"certified-operators-mf5rz\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") " pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.918616 master-0 kubenswrapper[7756]: I0220 11:50:02.916953 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z22pz\" (UniqueName: \"kubernetes.io/projected/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-kube-api-access-z22pz\") pod \"certified-operators-mf5rz\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") " pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:02.923497 master-0 kubenswrapper[7756]: I0220 11:50:02.923441 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkdbv"]
Feb 20 11:50:02.924707 master-0 kubenswrapper[7756]: I0220 11:50:02.924671 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:02.951865 master-0 kubenswrapper[7756]: I0220 11:50:02.945601 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkdbv"]
Feb 20 11:50:03.002708 master-0 kubenswrapper[7756]: I0220 11:50:03.002627 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-catalog-content\") pod \"community-operators-tkdbv\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") " pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.002708 master-0 kubenswrapper[7756]: I0220 11:50:03.002721 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgw4x\" (UniqueName: \"kubernetes.io/projected/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-kube-api-access-fgw4x\") pod \"community-operators-tkdbv\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") " pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.003083 master-0 kubenswrapper[7756]: I0220 11:50:03.002915 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-utilities\") pod \"community-operators-tkdbv\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") " pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.058931 master-0 kubenswrapper[7756]: I0220 11:50:03.058781 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:03.104102 master-0 kubenswrapper[7756]: I0220 11:50:03.103915 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-utilities\") pod \"community-operators-tkdbv\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") " pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.104102 master-0 kubenswrapper[7756]: I0220 11:50:03.103969 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-catalog-content\") pod \"community-operators-tkdbv\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") " pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.104102 master-0 kubenswrapper[7756]: I0220 11:50:03.104002 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgw4x\" (UniqueName: \"kubernetes.io/projected/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-kube-api-access-fgw4x\") pod \"community-operators-tkdbv\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") " pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.104799 master-0 kubenswrapper[7756]: I0220 11:50:03.104776 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-utilities\") pod \"community-operators-tkdbv\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") " pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.105118 master-0 kubenswrapper[7756]: I0220 11:50:03.105080 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-catalog-content\") pod \"community-operators-tkdbv\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") " pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.166559 master-0 kubenswrapper[7756]: I0220 11:50:03.164199 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgw4x\" (UniqueName: \"kubernetes.io/projected/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-kube-api-access-fgw4x\") pod \"community-operators-tkdbv\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") " pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.233891 master-0 kubenswrapper[7756]: I0220 11:50:03.233834 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 20 11:50:03.265802 master-0 kubenswrapper[7756]: I0220 11:50:03.265745 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:03.349191 master-0 kubenswrapper[7756]: I0220 11:50:03.349132 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"a827d746-cfd3-48a2-a20b-2ff1526986b9","Type":"ContainerStarted","Data":"87542918fa08c7c4d02b25510f491f9813c8b4b90b5f23c58f4f083551680cc2"}
Feb 20 11:50:03.349191 master-0 kubenswrapper[7756]: I0220 11:50:03.349177 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"a827d746-cfd3-48a2-a20b-2ff1526986b9","Type":"ContainerStarted","Data":"c266b103482362f3c418b4517deddb3769575b5e6f6333189c11e3e4fa22e93f"}
Feb 20 11:50:03.351337 master-0 kubenswrapper[7756]: I0220 11:50:03.351303 7756 generic.go:334] "Generic (PLEG): container finished" podID="fca213c3-42ca-4341-a2e6-a143b9389f9e" containerID="046adf25484f79333206c0fe041bc8e17c66fcf7f4778670c5be72d62d2804ae" exitCode=0
Feb 20 11:50:03.351418 master-0 kubenswrapper[7756]: I0220 11:50:03.351349 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" event={"ID":"fca213c3-42ca-4341-a2e6-a143b9389f9e","Type":"ContainerDied","Data":"046adf25484f79333206c0fe041bc8e17c66fcf7f4778670c5be72d62d2804ae"} Feb 20 11:50:03.351418 master-0 kubenswrapper[7756]: I0220 11:50:03.351368 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" event={"ID":"fca213c3-42ca-4341-a2e6-a143b9389f9e","Type":"ContainerStarted","Data":"507676e6f82ab903ac83daafdb4ad3f73a28bb521382cf0074ea56ae587cb87f"} Feb 20 11:50:03.359148 master-0 kubenswrapper[7756]: I0220 11:50:03.359032 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec","Type":"ContainerStarted","Data":"ab80be68bf4e51ef54cc0eec1c7816960fc41469504dbffd4d942cebf0931414"} Feb 20 11:50:03.359148 master-0 kubenswrapper[7756]: I0220 11:50:03.359092 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec","Type":"ContainerStarted","Data":"ef5687cfa7e6e042604067e80f4e15ff2c4827e0ae1ef872b3cf5d173fd3b030"} Feb 20 11:50:03.367447 master-0 kubenswrapper[7756]: I0220 11:50:03.367377 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=2.367353779 podStartE2EDuration="2.367353779s" podCreationTimestamp="2026-02-20 11:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:03.364146208 +0000 UTC m=+49.106394216" watchObservedRunningTime="2026-02-20 11:50:03.367353779 +0000 UTC m=+49.109601797" Feb 20 11:50:03.386130 master-0 kubenswrapper[7756]: I0220 11:50:03.385588 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.38556944 podStartE2EDuration="2.38556944s" podCreationTimestamp="2026-02-20 11:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:03.384575101 +0000 UTC m=+49.126823109" watchObservedRunningTime="2026-02-20 11:50:03.38556944 +0000 UTC m=+49.127817448" Feb 20 11:50:03.906571 master-0 kubenswrapper[7756]: I0220 11:50:03.905145 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 20 11:50:03.906571 master-0 kubenswrapper[7756]: I0220 11:50:03.905995 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:03.908900 master-0 kubenswrapper[7756]: I0220 11:50:03.908857 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 20 11:50:03.922589 master-0 kubenswrapper[7756]: I0220 11:50:03.922507 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 20 11:50:04.024947 master-0 kubenswrapper[7756]: I0220 11:50:04.024715 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-var-lock\") pod \"installer-1-master-0\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.024947 master-0 kubenswrapper[7756]: I0220 11:50:04.024790 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " 
pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.025211 master-0 kubenswrapper[7756]: I0220 11:50:04.025030 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9ba02-39d0-41fb-aed1-39923698bc0b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.125899 master-0 kubenswrapper[7756]: I0220 11:50:04.125856 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9ba02-39d0-41fb-aed1-39923698bc0b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.125899 master-0 kubenswrapper[7756]: I0220 11:50:04.125909 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-var-lock\") pod \"installer-1-master-0\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.126447 master-0 kubenswrapper[7756]: I0220 11:50:04.125930 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.126447 master-0 kubenswrapper[7756]: I0220 11:50:04.126008 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " 
pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.126447 master-0 kubenswrapper[7756]: I0220 11:50:04.126256 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-var-lock\") pod \"installer-1-master-0\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.144201 master-0 kubenswrapper[7756]: I0220 11:50:04.144168 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9ba02-39d0-41fb-aed1-39923698bc0b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.225623 master-0 kubenswrapper[7756]: I0220 11:50:04.225420 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:50:04.340817 master-0 kubenswrapper[7756]: I0220 11:50:04.340760 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4d8l9"] Feb 20 11:50:04.342417 master-0 kubenswrapper[7756]: I0220 11:50:04.342362 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.353120 master-0 kubenswrapper[7756]: I0220 11:50:04.352303 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d8l9"] Feb 20 11:50:04.377952 master-0 kubenswrapper[7756]: I0220 11:50:04.377898 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"5710eb66-9717-4beb-a8b2-19f6886376b3","Type":"ContainerStarted","Data":"97dbf6403141d9540379400f393a21ef236f6a9b6384164aeddd18c354d998df"} Feb 20 11:50:04.429716 master-0 kubenswrapper[7756]: I0220 11:50:04.429674 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr6kw\" (UniqueName: \"kubernetes.io/projected/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-kube-api-access-dr6kw\") pod \"redhat-marketplace-4d8l9\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") " pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.429798 master-0 kubenswrapper[7756]: I0220 11:50:04.429779 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-catalog-content\") pod \"redhat-marketplace-4d8l9\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") " pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.429964 master-0 kubenswrapper[7756]: I0220 11:50:04.429937 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-utilities\") pod \"redhat-marketplace-4d8l9\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") " pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.531384 master-0 kubenswrapper[7756]: I0220 11:50:04.531342 7756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dr6kw\" (UniqueName: \"kubernetes.io/projected/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-kube-api-access-dr6kw\") pod \"redhat-marketplace-4d8l9\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") " pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.531603 master-0 kubenswrapper[7756]: I0220 11:50:04.531584 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-catalog-content\") pod \"redhat-marketplace-4d8l9\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") " pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.532364 master-0 kubenswrapper[7756]: I0220 11:50:04.532343 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-utilities\") pod \"redhat-marketplace-4d8l9\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") " pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.532520 master-0 kubenswrapper[7756]: I0220 11:50:04.532475 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-catalog-content\") pod \"redhat-marketplace-4d8l9\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") " pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.532969 master-0 kubenswrapper[7756]: I0220 11:50:04.532949 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-utilities\") pod \"redhat-marketplace-4d8l9\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") " pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.567774 master-0 kubenswrapper[7756]: I0220 11:50:04.560424 7756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr6kw\" (UniqueName: \"kubernetes.io/projected/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-kube-api-access-dr6kw\") pod \"redhat-marketplace-4d8l9\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") " pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.718150 master-0 kubenswrapper[7756]: I0220 11:50:04.718073 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:04.820569 master-0 kubenswrapper[7756]: I0220 11:50:04.820500 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 20 11:50:04.887631 master-0 kubenswrapper[7756]: I0220 11:50:04.887513 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mf5rz"] Feb 20 11:50:04.905359 master-0 kubenswrapper[7756]: I0220 11:50:04.905268 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkdbv"] Feb 20 11:50:04.918201 master-0 kubenswrapper[7756]: W0220 11:50:04.918146 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3733ccb5_2cea_4151_a2a7_d9c089a34cbc.slice/crio-271cc74d3ac13c4272233cef14b9564ed745bec604aeb22abbc28cf2cb340d3b WatchSource:0}: Error finding container 271cc74d3ac13c4272233cef14b9564ed745bec604aeb22abbc28cf2cb340d3b: Status 404 returned error can't find the container with id 271cc74d3ac13c4272233cef14b9564ed745bec604aeb22abbc28cf2cb340d3b Feb 20 11:50:05.141238 master-0 kubenswrapper[7756]: I0220 11:50:05.141205 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d8l9"] Feb 20 11:50:05.390848 master-0 kubenswrapper[7756]: I0220 11:50:05.390582 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" event={"ID":"fca213c3-42ca-4341-a2e6-a143b9389f9e","Type":"ContainerStarted","Data":"712cccc43a13d92ba3162867d1808dfa460185a842ecad1856435d4ad90aa071"} Feb 20 11:50:05.392877 master-0 kubenswrapper[7756]: I0220 11:50:05.392836 7756 generic.go:334] "Generic (PLEG): container finished" podID="339f8487-0d2b-4f4f-9872-c629e7f3e2e1" containerID="9ba882d3908425079028ab1281244bc811ead996e74f3222e0a28296913d80f5" exitCode=0 Feb 20 11:50:05.392932 master-0 kubenswrapper[7756]: I0220 11:50:05.392889 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf5rz" event={"ID":"339f8487-0d2b-4f4f-9872-c629e7f3e2e1","Type":"ContainerDied","Data":"9ba882d3908425079028ab1281244bc811ead996e74f3222e0a28296913d80f5"} Feb 20 11:50:05.392932 master-0 kubenswrapper[7756]: I0220 11:50:05.392905 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf5rz" event={"ID":"339f8487-0d2b-4f4f-9872-c629e7f3e2e1","Type":"ContainerStarted","Data":"acab5b54f256c278bb063800e469a049966e6a788255cb6555d858b4efd4df61"} Feb 20 11:50:05.394415 master-0 kubenswrapper[7756]: I0220 11:50:05.394388 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"5710eb66-9717-4beb-a8b2-19f6886376b3","Type":"ContainerStarted","Data":"b016752d8ba5cbc29441e53dbfb424ff953b01aa96097dd394c1910c4e093b09"} Feb 20 11:50:05.405033 master-0 kubenswrapper[7756]: I0220 11:50:05.398922 7756 generic.go:334] "Generic (PLEG): container finished" podID="3733ccb5-2cea-4151-a2a7-d9c089a34cbc" containerID="be445a861a8c4eb01f24b747bb03f34ee72cd75d7bc09a2585ad126da8f250fe" exitCode=0 Feb 20 11:50:05.405033 master-0 kubenswrapper[7756]: I0220 11:50:05.398972 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkdbv" 
event={"ID":"3733ccb5-2cea-4151-a2a7-d9c089a34cbc","Type":"ContainerDied","Data":"be445a861a8c4eb01f24b747bb03f34ee72cd75d7bc09a2585ad126da8f250fe"} Feb 20 11:50:05.405033 master-0 kubenswrapper[7756]: I0220 11:50:05.399052 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkdbv" event={"ID":"3733ccb5-2cea-4151-a2a7-d9c089a34cbc","Type":"ContainerStarted","Data":"271cc74d3ac13c4272233cef14b9564ed745bec604aeb22abbc28cf2cb340d3b"} Feb 20 11:50:05.405033 master-0 kubenswrapper[7756]: I0220 11:50:05.400816 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d8l9" event={"ID":"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd","Type":"ContainerStarted","Data":"bed75b7b50642092b0adab8613c8dafc1d6047b83c4e361f7ca29d93aa1af83c"} Feb 20 11:50:05.405033 master-0 kubenswrapper[7756]: I0220 11:50:05.402329 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6" event={"ID":"f598b79b-809a-4b22-91f0-5227017f6bcb","Type":"ContainerStarted","Data":"6d932daec2143a48a4c215d84bda4caa56f09b2ba0e0db4926bdd00bce87f8c8"} Feb 20 11:50:05.405033 master-0 kubenswrapper[7756]: I0220 11:50:05.402824 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6" Feb 20 11:50:05.405033 master-0 kubenswrapper[7756]: I0220 11:50:05.403927 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"74e9ba02-39d0-41fb-aed1-39923698bc0b","Type":"ContainerStarted","Data":"ca54dfc79fe363224f0633dc3e9a5365e79752aa92793a430f4511b5aeb939dc"} Feb 20 11:50:05.405033 master-0 kubenswrapper[7756]: I0220 11:50:05.403968 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" 
event={"ID":"74e9ba02-39d0-41fb-aed1-39923698bc0b","Type":"ContainerStarted","Data":"79f70b0ba5af48f333359cfd6f71307155a704d196b35bf91b2237ea4c31acbc"} Feb 20 11:50:05.410122 master-0 kubenswrapper[7756]: I0220 11:50:05.410075 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6" Feb 20 11:50:05.430239 master-0 kubenswrapper[7756]: I0220 11:50:05.430177 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" podStartSLOduration=11.430157398 podStartE2EDuration="11.430157398s" podCreationTimestamp="2026-02-20 11:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:05.429917691 +0000 UTC m=+51.172165719" watchObservedRunningTime="2026-02-20 11:50:05.430157398 +0000 UTC m=+51.172405406" Feb 20 11:50:05.453014 master-0 kubenswrapper[7756]: I0220 11:50:05.452959 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6" podStartSLOduration=8.003458343 podStartE2EDuration="12.452939729s" podCreationTimestamp="2026-02-20 11:49:53 +0000 UTC" firstStartedPulling="2026-02-20 11:49:59.976732968 +0000 UTC m=+45.718981026" lastFinishedPulling="2026-02-20 11:50:04.426214374 +0000 UTC m=+50.168462412" observedRunningTime="2026-02-20 11:50:05.450921981 +0000 UTC m=+51.193169989" watchObservedRunningTime="2026-02-20 11:50:05.452939729 +0000 UTC m=+51.195187737" Feb 20 11:50:05.471326 master-0 kubenswrapper[7756]: I0220 11:50:05.471257 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=3.471242062 podStartE2EDuration="3.471242062s" podCreationTimestamp="2026-02-20 11:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:05.470638765 +0000 UTC m=+51.212886773" watchObservedRunningTime="2026-02-20 11:50:05.471242062 +0000 UTC m=+51.213490060" Feb 20 11:50:05.492999 master-0 kubenswrapper[7756]: I0220 11:50:05.492599 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=2.492578912 podStartE2EDuration="2.492578912s" podCreationTimestamp="2026-02-20 11:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:05.49181048 +0000 UTC m=+51.234058508" watchObservedRunningTime="2026-02-20 11:50:05.492578912 +0000 UTC m=+51.234826920" Feb 20 11:50:05.530616 master-0 kubenswrapper[7756]: I0220 11:50:05.530214 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ps68j"] Feb 20 11:50:05.533973 master-0 kubenswrapper[7756]: I0220 11:50:05.531306 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.552598 master-0 kubenswrapper[7756]: I0220 11:50:05.552551 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ps68j"] Feb 20 11:50:05.645521 master-0 kubenswrapper[7756]: I0220 11:50:05.645391 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx9z6\" (UniqueName: \"kubernetes.io/projected/50084c46-32ff-4e8a-b35e-8e7b1943cc04-kube-api-access-hx9z6\") pod \"redhat-operators-ps68j\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") " pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.645521 master-0 kubenswrapper[7756]: I0220 11:50:05.645475 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-catalog-content\") pod \"redhat-operators-ps68j\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") " pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.645521 master-0 kubenswrapper[7756]: I0220 11:50:05.645514 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-utilities\") pod \"redhat-operators-ps68j\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") " pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.747242 master-0 kubenswrapper[7756]: I0220 11:50:05.747181 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-catalog-content\") pod \"redhat-operators-ps68j\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") " pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.747242 master-0 kubenswrapper[7756]: I0220 
11:50:05.747245 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-utilities\") pod \"redhat-operators-ps68j\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") " pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.747509 master-0 kubenswrapper[7756]: I0220 11:50:05.747287 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx9z6\" (UniqueName: \"kubernetes.io/projected/50084c46-32ff-4e8a-b35e-8e7b1943cc04-kube-api-access-hx9z6\") pod \"redhat-operators-ps68j\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") " pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.748253 master-0 kubenswrapper[7756]: I0220 11:50:05.748043 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-catalog-content\") pod \"redhat-operators-ps68j\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") " pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.748253 master-0 kubenswrapper[7756]: I0220 11:50:05.748090 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-utilities\") pod \"redhat-operators-ps68j\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") " pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.764448 master-0 kubenswrapper[7756]: I0220 11:50:05.763732 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx9z6\" (UniqueName: \"kubernetes.io/projected/50084c46-32ff-4e8a-b35e-8e7b1943cc04-kube-api-access-hx9z6\") pod \"redhat-operators-ps68j\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") " pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:05.850692 master-0 kubenswrapper[7756]: I0220 
11:50:05.850600 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:06.269501 master-0 kubenswrapper[7756]: I0220 11:50:06.269421 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ps68j"] Feb 20 11:50:06.421873 master-0 kubenswrapper[7756]: I0220 11:50:06.421293 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps68j" event={"ID":"50084c46-32ff-4e8a-b35e-8e7b1943cc04","Type":"ContainerStarted","Data":"d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2"} Feb 20 11:50:06.421873 master-0 kubenswrapper[7756]: I0220 11:50:06.421359 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps68j" event={"ID":"50084c46-32ff-4e8a-b35e-8e7b1943cc04","Type":"ContainerStarted","Data":"177018fd0692881df9a10c94ae902b21c26f441b2c1f7434ed6731b2dc1c1347"} Feb 20 11:50:06.424997 master-0 kubenswrapper[7756]: I0220 11:50:06.424974 7756 generic.go:334] "Generic (PLEG): container finished" podID="c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" containerID="3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2" exitCode=0 Feb 20 11:50:06.425092 master-0 kubenswrapper[7756]: I0220 11:50:06.425027 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d8l9" event={"ID":"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd","Type":"ContainerDied","Data":"3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2"} Feb 20 11:50:06.647847 master-0 kubenswrapper[7756]: I0220 11:50:06.647724 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 11:50:06.829512 master-0 kubenswrapper[7756]: I0220 11:50:06.829454 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 11:50:06.948378 master-0 kubenswrapper[7756]: I0220 11:50:06.948261 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 11:50:06.948610 master-0 kubenswrapper[7756]: I0220 11:50:06.948593 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 11:50:06.967263 master-0 kubenswrapper[7756]: I0220 11:50:06.967216 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 11:50:07.118245 master-0 kubenswrapper[7756]: I0220 11:50:07.116467 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mf5rz"] Feb 20 11:50:07.432135 master-0 kubenswrapper[7756]: I0220 11:50:07.432021 7756 generic.go:334] "Generic (PLEG): container finished" podID="50084c46-32ff-4e8a-b35e-8e7b1943cc04" containerID="d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2" exitCode=0 Feb 20 11:50:07.433803 master-0 kubenswrapper[7756]: I0220 11:50:07.433771 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps68j" event={"ID":"50084c46-32ff-4e8a-b35e-8e7b1943cc04","Type":"ContainerDied","Data":"d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2"} Feb 20 11:50:07.439954 master-0 kubenswrapper[7756]: I0220 11:50:07.439488 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 11:50:07.539366 master-0 kubenswrapper[7756]: I0220 11:50:07.539179 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 11:50:07.544810 master-0 kubenswrapper[7756]: I0220 11:50:07.544776 7756 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 11:50:07.608396 master-0 kubenswrapper[7756]: I0220 11:50:07.608339 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-76v4z"] Feb 20 11:50:07.610395 master-0 kubenswrapper[7756]: I0220 11:50:07.610359 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.615271 master-0 kubenswrapper[7756]: I0220 11:50:07.615235 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-llz42" Feb 20 11:50:07.673766 master-0 kubenswrapper[7756]: I0220 11:50:07.670224 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-76v4z"] Feb 20 11:50:07.676600 master-0 kubenswrapper[7756]: I0220 11:50:07.676568 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-catalog-content\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.676683 master-0 kubenswrapper[7756]: I0220 11:50:07.676664 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-utilities\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.676811 master-0 kubenswrapper[7756]: I0220 11:50:07.676792 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hxz5\" (UniqueName: 
\"kubernetes.io/projected/19cf75ed-6a4e-444d-8975-fa6ecba79f13-kube-api-access-7hxz5\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.778445 master-0 kubenswrapper[7756]: I0220 11:50:07.777774 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-catalog-content\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.778445 master-0 kubenswrapper[7756]: I0220 11:50:07.777821 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-utilities\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.778445 master-0 kubenswrapper[7756]: I0220 11:50:07.777845 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hxz5\" (UniqueName: \"kubernetes.io/projected/19cf75ed-6a4e-444d-8975-fa6ecba79f13-kube-api-access-7hxz5\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.779569 master-0 kubenswrapper[7756]: I0220 11:50:07.779492 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-catalog-content\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.779922 master-0 kubenswrapper[7756]: I0220 11:50:07.779865 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-utilities\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.904835 master-0 kubenswrapper[7756]: I0220 11:50:07.903808 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hxz5\" (UniqueName: \"kubernetes.io/projected/19cf75ed-6a4e-444d-8975-fa6ecba79f13-kube-api-access-7hxz5\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:07.930662 master-0 kubenswrapper[7756]: I0220 11:50:07.930460 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:08.489608 master-0 kubenswrapper[7756]: I0220 11:50:08.489497 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-76v4z"] Feb 20 11:50:08.525016 master-0 kubenswrapper[7756]: W0220 11:50:08.524785 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19cf75ed_6a4e_444d_8975_fa6ecba79f13.slice/crio-84bc6873f1c2f152a188b93adf9b13caf01f769508b7055c0e1ef90ebe5496e8 WatchSource:0}: Error finding container 84bc6873f1c2f152a188b93adf9b13caf01f769508b7055c0e1ef90ebe5496e8: Status 404 returned error can't find the container with id 84bc6873f1c2f152a188b93adf9b13caf01f769508b7055c0e1ef90ebe5496e8 Feb 20 11:50:08.533108 master-0 kubenswrapper[7756]: I0220 11:50:08.533039 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkdbv"] Feb 20 11:50:08.922078 master-0 kubenswrapper[7756]: I0220 11:50:08.922009 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7kn5q"] Feb 20 11:50:08.924912 
master-0 kubenswrapper[7756]: I0220 11:50:08.924886 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:08.926811 master-0 kubenswrapper[7756]: I0220 11:50:08.926756 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-jmbqp" Feb 20 11:50:08.940150 master-0 kubenswrapper[7756]: I0220 11:50:08.939989 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kn5q"] Feb 20 11:50:09.004432 master-0 kubenswrapper[7756]: I0220 11:50:09.003141 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-catalog-content\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.004432 master-0 kubenswrapper[7756]: I0220 11:50:09.003195 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-utilities\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.004432 master-0 kubenswrapper[7756]: I0220 11:50:09.003248 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dx9s\" (UniqueName: \"kubernetes.io/projected/34382460-b2d7-4154-87ba-c0347a4c0f1b-kube-api-access-5dx9s\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.110103 master-0 kubenswrapper[7756]: I0220 11:50:09.104586 7756 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-5dx9s\" (UniqueName: \"kubernetes.io/projected/34382460-b2d7-4154-87ba-c0347a4c0f1b-kube-api-access-5dx9s\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.110103 master-0 kubenswrapper[7756]: I0220 11:50:09.104668 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-catalog-content\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.110103 master-0 kubenswrapper[7756]: I0220 11:50:09.104698 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-utilities\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.110103 master-0 kubenswrapper[7756]: I0220 11:50:09.105161 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-utilities\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.110103 master-0 kubenswrapper[7756]: I0220 11:50:09.105644 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-catalog-content\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.120270 master-0 kubenswrapper[7756]: I0220 11:50:09.120231 7756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5dx9s\" (UniqueName: \"kubernetes.io/projected/34382460-b2d7-4154-87ba-c0347a4c0f1b-kube-api-access-5dx9s\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.246013 master-0 kubenswrapper[7756]: I0220 11:50:09.245934 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:09.451829 master-0 kubenswrapper[7756]: I0220 11:50:09.451584 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76v4z" event={"ID":"19cf75ed-6a4e-444d-8975-fa6ecba79f13","Type":"ContainerDied","Data":"6ae49cdae8b47f749dc0f6149f6ebf356a1a949182d0fffef1f74b151688ef30"} Feb 20 11:50:09.451829 master-0 kubenswrapper[7756]: I0220 11:50:09.451617 7756 generic.go:334] "Generic (PLEG): container finished" podID="19cf75ed-6a4e-444d-8975-fa6ecba79f13" containerID="6ae49cdae8b47f749dc0f6149f6ebf356a1a949182d0fffef1f74b151688ef30" exitCode=0 Feb 20 11:50:09.452075 master-0 kubenswrapper[7756]: I0220 11:50:09.451830 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76v4z" event={"ID":"19cf75ed-6a4e-444d-8975-fa6ecba79f13","Type":"ContainerStarted","Data":"84bc6873f1c2f152a188b93adf9b13caf01f769508b7055c0e1ef90ebe5496e8"} Feb 20 11:50:09.661152 master-0 kubenswrapper[7756]: I0220 11:50:09.660511 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kn5q"] Feb 20 11:50:09.668482 master-0 kubenswrapper[7756]: W0220 11:50:09.668158 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34382460_b2d7_4154_87ba_c0347a4c0f1b.slice/crio-7bc9a872390b5d9f7e6deaa6fe763c395d3fe8f5593fe4a35eea402b1c688808 WatchSource:0}: Error finding container 
7bc9a872390b5d9f7e6deaa6fe763c395d3fe8f5593fe4a35eea402b1c688808: Status 404 returned error can't find the container with id 7bc9a872390b5d9f7e6deaa6fe763c395d3fe8f5593fe4a35eea402b1c688808 Feb 20 11:50:09.916629 master-0 kubenswrapper[7756]: I0220 11:50:09.916504 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d8l9"] Feb 20 11:50:10.423393 master-0 kubenswrapper[7756]: I0220 11:50:10.423343 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89t2q"] Feb 20 11:50:10.424208 master-0 kubenswrapper[7756]: I0220 11:50:10.424171 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:10.426417 master-0 kubenswrapper[7756]: I0220 11:50:10.426355 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-lv5zr" Feb 20 11:50:10.458552 master-0 kubenswrapper[7756]: I0220 11:50:10.458471 7756 generic.go:334] "Generic (PLEG): container finished" podID="34382460-b2d7-4154-87ba-c0347a4c0f1b" containerID="f4e2e12c0322e37b5aee8b5bfc056b8e62e780d54f553d8e5ba777ff04b0e41e" exitCode=0 Feb 20 11:50:10.458712 master-0 kubenswrapper[7756]: I0220 11:50:10.458562 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kn5q" event={"ID":"34382460-b2d7-4154-87ba-c0347a4c0f1b","Type":"ContainerDied","Data":"f4e2e12c0322e37b5aee8b5bfc056b8e62e780d54f553d8e5ba777ff04b0e41e"} Feb 20 11:50:10.458712 master-0 kubenswrapper[7756]: I0220 11:50:10.458607 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kn5q" event={"ID":"34382460-b2d7-4154-87ba-c0347a4c0f1b","Type":"ContainerStarted","Data":"7bc9a872390b5d9f7e6deaa6fe763c395d3fe8f5593fe4a35eea402b1c688808"} Feb 20 11:50:10.529925 master-0 kubenswrapper[7756]: I0220 11:50:10.529835 7756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-utilities\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:10.530160 master-0 kubenswrapper[7756]: I0220 11:50:10.529955 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-catalog-content\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:10.530160 master-0 kubenswrapper[7756]: I0220 11:50:10.530001 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwgg6\" (UniqueName: \"kubernetes.io/projected/8df029f2-d0ec-4543-9371-7694b1e85a06-kube-api-access-kwgg6\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:10.631748 master-0 kubenswrapper[7756]: I0220 11:50:10.631674 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwgg6\" (UniqueName: \"kubernetes.io/projected/8df029f2-d0ec-4543-9371-7694b1e85a06-kube-api-access-kwgg6\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:10.632006 master-0 kubenswrapper[7756]: I0220 11:50:10.631784 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-utilities\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " 
pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:10.632006 master-0 kubenswrapper[7756]: I0220 11:50:10.631820 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-catalog-content\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:10.632354 master-0 kubenswrapper[7756]: I0220 11:50:10.632311 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-catalog-content\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:10.632664 master-0 kubenswrapper[7756]: I0220 11:50:10.632615 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-utilities\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:10.660177 master-0 kubenswrapper[7756]: I0220 11:50:10.659462 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89t2q"] Feb 20 11:50:11.474336 master-0 kubenswrapper[7756]: I0220 11:50:11.472498 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwgg6\" (UniqueName: \"kubernetes.io/projected/8df029f2-d0ec-4543-9371-7694b1e85a06-kube-api-access-kwgg6\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:11.493635 master-0 kubenswrapper[7756]: I0220 11:50:11.488229 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 20 11:50:11.493635 master-0 kubenswrapper[7756]: I0220 11:50:11.488560 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="52c9c322-a0d1-4d27-b3bc-aaa8bd25beec" containerName="installer" containerID="cri-o://ab80be68bf4e51ef54cc0eec1c7816960fc41469504dbffd4d942cebf0931414" gracePeriod=30 Feb 20 11:50:11.505506 master-0 kubenswrapper[7756]: I0220 11:50:11.505434 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ps68j"] Feb 20 11:50:11.657383 master-0 kubenswrapper[7756]: I0220 11:50:11.657324 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:11.720110 master-0 kubenswrapper[7756]: I0220 11:50:11.719869 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q287t"] Feb 20 11:50:11.720771 master-0 kubenswrapper[7756]: I0220 11:50:11.720740 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:11.722755 master-0 kubenswrapper[7756]: I0220 11:50:11.722254 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ffxph" Feb 20 11:50:11.733080 master-0 kubenswrapper[7756]: I0220 11:50:11.732980 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q287t"] Feb 20 11:50:11.759038 master-0 kubenswrapper[7756]: I0220 11:50:11.758776 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-catalog-content\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:11.759038 master-0 kubenswrapper[7756]: I0220 11:50:11.758874 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-utilities\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:11.759038 master-0 kubenswrapper[7756]: I0220 11:50:11.758919 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5z86\" (UniqueName: \"kubernetes.io/projected/11aaad8c-2f25-460f-b4af-f27d8bc682a0-kube-api-access-x5z86\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:11.860595 master-0 kubenswrapper[7756]: I0220 11:50:11.859992 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-catalog-content\") pod 
\"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:11.860595 master-0 kubenswrapper[7756]: I0220 11:50:11.860090 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-utilities\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:11.860595 master-0 kubenswrapper[7756]: I0220 11:50:11.860136 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5z86\" (UniqueName: \"kubernetes.io/projected/11aaad8c-2f25-460f-b4af-f27d8bc682a0-kube-api-access-x5z86\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:11.861502 master-0 kubenswrapper[7756]: I0220 11:50:11.861214 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-catalog-content\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:11.861502 master-0 kubenswrapper[7756]: I0220 11:50:11.861463 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-utilities\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:11.885396 master-0 kubenswrapper[7756]: I0220 11:50:11.885217 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5z86\" (UniqueName: \"kubernetes.io/projected/11aaad8c-2f25-460f-b4af-f27d8bc682a0-kube-api-access-x5z86\") pod 
\"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:12.049663 master-0 kubenswrapper[7756]: I0220 11:50:12.049541 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:12.131203 master-0 kubenswrapper[7756]: I0220 11:50:12.131166 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89t2q"] Feb 20 11:50:12.142619 master-0 kubenswrapper[7756]: W0220 11:50:12.142544 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8df029f2_d0ec_4543_9371_7694b1e85a06.slice/crio-e112dc6a9d5f726f666b1385197c77d837257cbee8251d26060f19151f5ada2f WatchSource:0}: Error finding container e112dc6a9d5f726f666b1385197c77d837257cbee8251d26060f19151f5ada2f: Status 404 returned error can't find the container with id e112dc6a9d5f726f666b1385197c77d837257cbee8251d26060f19151f5ada2f Feb 20 11:50:12.308843 master-0 kubenswrapper[7756]: I0220 11:50:12.308788 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q287t"] Feb 20 11:50:12.316943 master-0 kubenswrapper[7756]: W0220 11:50:12.316823 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11aaad8c_2f25_460f_b4af_f27d8bc682a0.slice/crio-1691f192a8834aa22572ce2ad682bc87e326607190761ea473a2ecaf32c9e175 WatchSource:0}: Error finding container 1691f192a8834aa22572ce2ad682bc87e326607190761ea473a2ecaf32c9e175: Status 404 returned error can't find the container with id 1691f192a8834aa22572ce2ad682bc87e326607190761ea473a2ecaf32c9e175 Feb 20 11:50:12.489644 master-0 kubenswrapper[7756]: I0220 11:50:12.489581 7756 generic.go:334] "Generic (PLEG): container finished" podID="8df029f2-d0ec-4543-9371-7694b1e85a06" 
containerID="9d2f7db518937a8a9582f2e5f129e777d3a2b50cb5bc9e1a2c9bbfd577def479" exitCode=0 Feb 20 11:50:12.490199 master-0 kubenswrapper[7756]: I0220 11:50:12.489709 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89t2q" event={"ID":"8df029f2-d0ec-4543-9371-7694b1e85a06","Type":"ContainerDied","Data":"9d2f7db518937a8a9582f2e5f129e777d3a2b50cb5bc9e1a2c9bbfd577def479"} Feb 20 11:50:12.490263 master-0 kubenswrapper[7756]: I0220 11:50:12.490240 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89t2q" event={"ID":"8df029f2-d0ec-4543-9371-7694b1e85a06","Type":"ContainerStarted","Data":"e112dc6a9d5f726f666b1385197c77d837257cbee8251d26060f19151f5ada2f"} Feb 20 11:50:12.492579 master-0 kubenswrapper[7756]: I0220 11:50:12.491753 7756 generic.go:334] "Generic (PLEG): container finished" podID="11aaad8c-2f25-460f-b4af-f27d8bc682a0" containerID="ff081d256f16ffc5993aee690d47f471a95bc015fd6771879fb0da9d5c9c2b0b" exitCode=0 Feb 20 11:50:12.492579 master-0 kubenswrapper[7756]: I0220 11:50:12.491794 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q287t" event={"ID":"11aaad8c-2f25-460f-b4af-f27d8bc682a0","Type":"ContainerDied","Data":"ff081d256f16ffc5993aee690d47f471a95bc015fd6771879fb0da9d5c9c2b0b"} Feb 20 11:50:12.492579 master-0 kubenswrapper[7756]: I0220 11:50:12.491843 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q287t" event={"ID":"11aaad8c-2f25-460f-b4af-f27d8bc682a0","Type":"ContainerStarted","Data":"1691f192a8834aa22572ce2ad682bc87e326607190761ea473a2ecaf32c9e175"} Feb 20 11:50:13.010371 master-0 kubenswrapper[7756]: I0220 11:50:13.010253 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 20 11:50:13.012892 master-0 kubenswrapper[7756]: I0220 11:50:13.012838 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:50:13.014311 master-0 kubenswrapper[7756]: I0220 11:50:13.014261 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 20 11:50:13.017380 master-0 kubenswrapper[7756]: I0220 11:50:13.017096 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-zgwqj" Feb 20 11:50:13.082756 master-0 kubenswrapper[7756]: I0220 11:50:13.082311 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-var-lock\") pod \"installer-2-master-0\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:50:13.082756 master-0 kubenswrapper[7756]: I0220 11:50:13.082372 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:50:13.082756 master-0 kubenswrapper[7756]: I0220 11:50:13.082403 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:50:13.183175 master-0 kubenswrapper[7756]: I0220 11:50:13.182451 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"] Feb 20 11:50:13.183175 master-0 kubenswrapper[7756]: I0220 11:50:13.182696 7756 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6" podUID="f598b79b-809a-4b22-91f0-5227017f6bcb" containerName="controller-manager" containerID="cri-o://6d932daec2143a48a4c215d84bda4caa56f09b2ba0e0db4926bdd00bce87f8c8" gracePeriod=30 Feb 20 11:50:13.183423 master-0 kubenswrapper[7756]: I0220 11:50:13.183290 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-var-lock\") pod \"installer-2-master-0\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:50:13.183423 master-0 kubenswrapper[7756]: I0220 11:50:13.183333 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:50:13.183423 master-0 kubenswrapper[7756]: I0220 11:50:13.183362 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:50:13.185075 master-0 kubenswrapper[7756]: I0220 11:50:13.183701 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:50:13.185075 master-0 kubenswrapper[7756]: I0220 11:50:13.183620 7756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-var-lock\") pod \"installer-2-master-0\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 11:50:13.258436 master-0 kubenswrapper[7756]: I0220 11:50:13.258392 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 11:50:13.266913 master-0 kubenswrapper[7756]: I0220 11:50:13.266792 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"]
Feb 20 11:50:13.267409 master-0 kubenswrapper[7756]: I0220 11:50:13.267341 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p" podUID="0e7df828-4166-4e0e-bb1e-042b3d14a6b6" containerName="route-controller-manager" containerID="cri-o://50b3d6e5e7ab30a6cfdc8ce1d5892866b82c6696745641cae28d3888bec7ebbc" gracePeriod=30
Feb 20 11:50:13.406247 master-0 kubenswrapper[7756]: I0220 11:50:13.406188 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 11:50:13.496247 master-0 kubenswrapper[7756]: I0220 11:50:13.496192 7756 generic.go:334] "Generic (PLEG): container finished" podID="f598b79b-809a-4b22-91f0-5227017f6bcb" containerID="6d932daec2143a48a4c215d84bda4caa56f09b2ba0e0db4926bdd00bce87f8c8" exitCode=0
Feb 20 11:50:13.496692 master-0 kubenswrapper[7756]: I0220 11:50:13.496258 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6" event={"ID":"f598b79b-809a-4b22-91f0-5227017f6bcb","Type":"ContainerDied","Data":"6d932daec2143a48a4c215d84bda4caa56f09b2ba0e0db4926bdd00bce87f8c8"}
Feb 20 11:50:13.497104 master-0 kubenswrapper[7756]: I0220 11:50:13.497074 7756 generic.go:334] "Generic (PLEG): container finished" podID="0e7df828-4166-4e0e-bb1e-042b3d14a6b6" containerID="50b3d6e5e7ab30a6cfdc8ce1d5892866b82c6696745641cae28d3888bec7ebbc" exitCode=0
Feb 20 11:50:13.497104 master-0 kubenswrapper[7756]: I0220 11:50:13.497097 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p" event={"ID":"0e7df828-4166-4e0e-bb1e-042b3d14a6b6","Type":"ContainerDied","Data":"50b3d6e5e7ab30a6cfdc8ce1d5892866b82c6696745641cae28d3888bec7ebbc"}
Feb 20 11:50:13.895071 master-0 kubenswrapper[7756]: I0220 11:50:13.895014 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:50:13.965148 master-0 kubenswrapper[7756]: I0220 11:50:13.965122 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.096138 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-proxy-ca-bundles\") pod \"f598b79b-809a-4b22-91f0-5227017f6bcb\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") "
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097120 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-config\") pod \"f598b79b-809a-4b22-91f0-5227017f6bcb\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") "
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097158 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-client-ca\") pod \"f598b79b-809a-4b22-91f0-5227017f6bcb\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") "
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.096752 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f598b79b-809a-4b22-91f0-5227017f6bcb" (UID: "f598b79b-809a-4b22-91f0-5227017f6bcb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097197 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-config\") pod \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") "
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097250 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-client-ca\") pod \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") "
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097289 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk4rp\" (UniqueName: \"kubernetes.io/projected/f598b79b-809a-4b22-91f0-5227017f6bcb-kube-api-access-jk4rp\") pod \"f598b79b-809a-4b22-91f0-5227017f6bcb\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") "
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097321 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbcxf\" (UniqueName: \"kubernetes.io/projected/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-kube-api-access-tbcxf\") pod \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") "
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097352 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f598b79b-809a-4b22-91f0-5227017f6bcb-serving-cert\") pod \"f598b79b-809a-4b22-91f0-5227017f6bcb\" (UID: \"f598b79b-809a-4b22-91f0-5227017f6bcb\") "
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097418 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-serving-cert\") pod \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\" (UID: \"0e7df828-4166-4e0e-bb1e-042b3d14a6b6\") "
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097687 7756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097803 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-config" (OuterVolumeSpecName: "config") pod "f598b79b-809a-4b22-91f0-5227017f6bcb" (UID: "f598b79b-809a-4b22-91f0-5227017f6bcb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:50:14.098181 master-0 kubenswrapper[7756]: I0220 11:50:14.097963 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-client-ca" (OuterVolumeSpecName: "client-ca") pod "f598b79b-809a-4b22-91f0-5227017f6bcb" (UID: "f598b79b-809a-4b22-91f0-5227017f6bcb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:50:14.098720 master-0 kubenswrapper[7756]: I0220 11:50:14.098483 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-config" (OuterVolumeSpecName: "config") pod "0e7df828-4166-4e0e-bb1e-042b3d14a6b6" (UID: "0e7df828-4166-4e0e-bb1e-042b3d14a6b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:50:14.101660 master-0 kubenswrapper[7756]: I0220 11:50:14.098762 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e7df828-4166-4e0e-bb1e-042b3d14a6b6" (UID: "0e7df828-4166-4e0e-bb1e-042b3d14a6b6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:50:14.102692 master-0 kubenswrapper[7756]: I0220 11:50:14.102079 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e7df828-4166-4e0e-bb1e-042b3d14a6b6" (UID: "0e7df828-4166-4e0e-bb1e-042b3d14a6b6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:50:14.102692 master-0 kubenswrapper[7756]: I0220 11:50:14.102613 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-kube-api-access-tbcxf" (OuterVolumeSpecName: "kube-api-access-tbcxf") pod "0e7df828-4166-4e0e-bb1e-042b3d14a6b6" (UID: "0e7df828-4166-4e0e-bb1e-042b3d14a6b6"). InnerVolumeSpecName "kube-api-access-tbcxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:50:14.103413 master-0 kubenswrapper[7756]: I0220 11:50:14.103317 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f598b79b-809a-4b22-91f0-5227017f6bcb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f598b79b-809a-4b22-91f0-5227017f6bcb" (UID: "f598b79b-809a-4b22-91f0-5227017f6bcb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:50:14.105980 master-0 kubenswrapper[7756]: I0220 11:50:14.105951 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f598b79b-809a-4b22-91f0-5227017f6bcb-kube-api-access-jk4rp" (OuterVolumeSpecName: "kube-api-access-jk4rp") pod "f598b79b-809a-4b22-91f0-5227017f6bcb" (UID: "f598b79b-809a-4b22-91f0-5227017f6bcb"). InnerVolumeSpecName "kube-api-access-jk4rp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:50:14.198398 master-0 kubenswrapper[7756]: I0220 11:50:14.198254 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:14.198398 master-0 kubenswrapper[7756]: I0220 11:50:14.198287 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:14.198398 master-0 kubenswrapper[7756]: I0220 11:50:14.198296 7756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f598b79b-809a-4b22-91f0-5227017f6bcb-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:14.198398 master-0 kubenswrapper[7756]: I0220 11:50:14.198304 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:14.198398 master-0 kubenswrapper[7756]: I0220 11:50:14.198313 7756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:14.198398 master-0 kubenswrapper[7756]: I0220 11:50:14.198324 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk4rp\" (UniqueName: \"kubernetes.io/projected/f598b79b-809a-4b22-91f0-5227017f6bcb-kube-api-access-jk4rp\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:14.198398 master-0 kubenswrapper[7756]: I0220 11:50:14.198335 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbcxf\" (UniqueName: \"kubernetes.io/projected/0e7df828-4166-4e0e-bb1e-042b3d14a6b6-kube-api-access-tbcxf\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:14.198398 master-0 kubenswrapper[7756]: I0220 11:50:14.198343 7756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f598b79b-809a-4b22-91f0-5227017f6bcb-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:14.574891 master-0 kubenswrapper[7756]: I0220 11:50:14.574858 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6" event={"ID":"f598b79b-809a-4b22-91f0-5227017f6bcb","Type":"ContainerDied","Data":"79ecee64aaf5eb07e6327598335a8418cc9eadd15d6c29e3a6287aa09900ff15"}
Feb 20 11:50:14.575226 master-0 kubenswrapper[7756]: I0220 11:50:14.575212 7756 scope.go:117] "RemoveContainer" containerID="6d932daec2143a48a4c215d84bda4caa56f09b2ba0e0db4926bdd00bce87f8c8"
Feb 20 11:50:14.575396 master-0 kubenswrapper[7756]: I0220 11:50:14.575382 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"
Feb 20 11:50:14.587848 master-0 kubenswrapper[7756]: I0220 11:50:14.587800 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"
Feb 20 11:50:14.630149 master-0 kubenswrapper[7756]: I0220 11:50:14.630105 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p" event={"ID":"0e7df828-4166-4e0e-bb1e-042b3d14a6b6","Type":"ContainerDied","Data":"c51fc9976235eba806445ba527326ae1186d7ebeaf647b93b2ea841938dc4883"}
Feb 20 11:50:14.630365 master-0 kubenswrapper[7756]: I0220 11:50:14.630352 7756 scope.go:117] "RemoveContainer" containerID="50b3d6e5e7ab30a6cfdc8ce1d5892866b82c6696745641cae28d3888bec7ebbc"
Feb 20 11:50:14.643734 master-0 kubenswrapper[7756]: I0220 11:50:14.643679 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-599c7886f5-zltnd"]
Feb 20 11:50:14.644107 master-0 kubenswrapper[7756]: E0220 11:50:14.644027 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7df828-4166-4e0e-bb1e-042b3d14a6b6" containerName="route-controller-manager"
Feb 20 11:50:14.644107 master-0 kubenswrapper[7756]: I0220 11:50:14.644047 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7df828-4166-4e0e-bb1e-042b3d14a6b6" containerName="route-controller-manager"
Feb 20 11:50:14.644107 master-0 kubenswrapper[7756]: E0220 11:50:14.644079 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f598b79b-809a-4b22-91f0-5227017f6bcb" containerName="controller-manager"
Feb 20 11:50:14.644220 master-0 kubenswrapper[7756]: I0220 11:50:14.644114 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f598b79b-809a-4b22-91f0-5227017f6bcb" containerName="controller-manager"
Feb 20 11:50:14.644220 master-0 kubenswrapper[7756]: I0220 11:50:14.644202 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f598b79b-809a-4b22-91f0-5227017f6bcb" containerName="controller-manager"
Feb 20 11:50:14.644220 master-0 kubenswrapper[7756]: I0220 11:50:14.644214 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7df828-4166-4e0e-bb1e-042b3d14a6b6" containerName="route-controller-manager"
Feb 20 11:50:14.644645 master-0 kubenswrapper[7756]: I0220 11:50:14.644621 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.649238 master-0 kubenswrapper[7756]: I0220 11:50:14.649205 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 11:50:14.650295 master-0 kubenswrapper[7756]: I0220 11:50:14.650189 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 11:50:14.650295 master-0 kubenswrapper[7756]: I0220 11:50:14.650221 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 11:50:14.651121 master-0 kubenswrapper[7756]: I0220 11:50:14.651107 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-69d2b"
Feb 20 11:50:14.652165 master-0 kubenswrapper[7756]: I0220 11:50:14.651584 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 11:50:14.652165 master-0 kubenswrapper[7756]: I0220 11:50:14.652085 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 11:50:14.668160 master-0 kubenswrapper[7756]: I0220 11:50:14.667638 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 11:50:14.668450 master-0 kubenswrapper[7756]: I0220 11:50:14.668248 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 20 11:50:14.721316 master-0 kubenswrapper[7756]: I0220 11:50:14.721274 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-599c7886f5-zltnd"]
Feb 20 11:50:14.780069 master-0 kubenswrapper[7756]: I0220 11:50:14.778126 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2hwr\" (UniqueName: \"kubernetes.io/projected/98226a59-5234-48f3-a9cd-21de305810dc-kube-api-access-j2hwr\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.780069 master-0 kubenswrapper[7756]: I0220 11:50:14.778176 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.780069 master-0 kubenswrapper[7756]: I0220 11:50:14.778230 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.780069 master-0 kubenswrapper[7756]: I0220 11:50:14.778252 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.780069 master-0 kubenswrapper[7756]: I0220 11:50:14.778273 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.879198 master-0 kubenswrapper[7756]: I0220 11:50:14.879096 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2hwr\" (UniqueName: \"kubernetes.io/projected/98226a59-5234-48f3-a9cd-21de305810dc-kube-api-access-j2hwr\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.879198 master-0 kubenswrapper[7756]: I0220 11:50:14.879147 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.879198 master-0 kubenswrapper[7756]: I0220 11:50:14.879198 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.879425 master-0 kubenswrapper[7756]: I0220 11:50:14.879217 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.879425 master-0 kubenswrapper[7756]: I0220 11:50:14.879241 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.880459 master-0 kubenswrapper[7756]: I0220 11:50:14.880427 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.881795 master-0 kubenswrapper[7756]: I0220 11:50:14.881727 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.882138 master-0 kubenswrapper[7756]: I0220 11:50:14.882111 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:14.883349 master-0 kubenswrapper[7756]: I0220 11:50:14.883321 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:15.385891 master-0 kubenswrapper[7756]: I0220 11:50:15.381594 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"]
Feb 20 11:50:15.385891 master-0 kubenswrapper[7756]: I0220 11:50:15.382514 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"
Feb 20 11:50:15.387553 master-0 kubenswrapper[7756]: I0220 11:50:15.387199 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 20 11:50:15.387553 master-0 kubenswrapper[7756]: I0220 11:50:15.387358 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-vncjl"
Feb 20 11:50:15.392755 master-0 kubenswrapper[7756]: I0220 11:50:15.389022 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 20 11:50:15.393250 master-0 kubenswrapper[7756]: I0220 11:50:15.393114 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 20 11:50:15.405635 master-0 kubenswrapper[7756]: I0220 11:50:15.405389 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"]
Feb 20 11:50:15.407436 master-0 kubenswrapper[7756]: I0220 11:50:15.407334 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2hwr\" (UniqueName: \"kubernetes.io/projected/98226a59-5234-48f3-a9cd-21de305810dc-kube-api-access-j2hwr\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:15.489254 master-0 kubenswrapper[7756]: I0220 11:50:15.489212 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/21e8e44b-b883-4afb-af90-d6c1265edf34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"
Feb 20 11:50:15.489452 master-0 kubenswrapper[7756]: I0220 11:50:15.489284 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk6hv\" (UniqueName: \"kubernetes.io/projected/21e8e44b-b883-4afb-af90-d6c1265edf34-kube-api-access-rk6hv\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"
Feb 20 11:50:15.590888 master-0 kubenswrapper[7756]: I0220 11:50:15.590787 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/21e8e44b-b883-4afb-af90-d6c1265edf34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"
Feb 20 11:50:15.591724 master-0 kubenswrapper[7756]: I0220 11:50:15.591058 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk6hv\" (UniqueName: \"kubernetes.io/projected/21e8e44b-b883-4afb-af90-d6c1265edf34-kube-api-access-rk6hv\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"
Feb 20 11:50:15.594713 master-0 kubenswrapper[7756]: I0220 11:50:15.594123 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7bd4430b-8dbc-46df-9efe-49d520a7c75a","Type":"ContainerStarted","Data":"e80cac2721cbb0873c9a56ecbcc2ab13f0cf0ddd137a7458a4798813bbf93c32"}
Feb 20 11:50:15.594713 master-0 kubenswrapper[7756]: I0220 11:50:15.594165 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7bd4430b-8dbc-46df-9efe-49d520a7c75a","Type":"ContainerStarted","Data":"55661699f170197933eff4a7d62dfa673dfa4d47667b396a98f1b608289f577a"}
Feb 20 11:50:15.594713 master-0 kubenswrapper[7756]: I0220 11:50:15.594638 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/21e8e44b-b883-4afb-af90-d6c1265edf34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"
Feb 20 11:50:15.785388 master-0 kubenswrapper[7756]: I0220 11:50:15.784622 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:15.808139 master-0 kubenswrapper[7756]: I0220 11:50:15.808018 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"]
Feb 20 11:50:15.842338 master-0 kubenswrapper[7756]: I0220 11:50:15.842305 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-796b564b-tbg9p"]
Feb 20 11:50:15.858151 master-0 kubenswrapper[7756]: I0220 11:50:15.857907 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 20 11:50:15.858342 master-0 kubenswrapper[7756]: I0220 11:50:15.858292 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="a827d746-cfd3-48a2-a20b-2ff1526986b9" containerName="installer" containerID="cri-o://87542918fa08c7c4d02b25510f491f9813c8b4b90b5f23c58f4f083551680cc2" gracePeriod=30
Feb 20 11:50:15.877845 master-0 kubenswrapper[7756]: I0220 11:50:15.877420 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk6hv\" (UniqueName: \"kubernetes.io/projected/21e8e44b-b883-4afb-af90-d6c1265edf34-kube-api-access-rk6hv\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"
Feb 20 11:50:15.954786 master-0 kubenswrapper[7756]: I0220 11:50:15.954716 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"]
Feb 20 11:50:15.961693 master-0 kubenswrapper[7756]: I0220 11:50:15.961652 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b79dbbc7-zvpx6"]
Feb 20 11:50:16.033396 master-0 kubenswrapper[7756]: I0220 11:50:16.033335 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"
Feb 20 11:50:16.290244 master-0 kubenswrapper[7756]: I0220 11:50:16.290173 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-599c7886f5-zltnd"]
Feb 20 11:50:16.304330 master-0 kubenswrapper[7756]: W0220 11:50:16.303485 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98226a59_5234_48f3_a9cd_21de305810dc.slice/crio-e668bf18622f735aba88fe56630f792fd4bf653bbe4e51d87240b3f22f8d64bd WatchSource:0}: Error finding container e668bf18622f735aba88fe56630f792fd4bf653bbe4e51d87240b3f22f8d64bd: Status 404 returned error can't find the container with id e668bf18622f735aba88fe56630f792fd4bf653bbe4e51d87240b3f22f8d64bd
Feb 20 11:50:16.434858 master-0 kubenswrapper[7756]: I0220 11:50:16.434806 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5"]
Feb 20 11:50:16.438357 master-0 kubenswrapper[7756]: W0220 11:50:16.438303 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e8e44b_b883_4afb_af90_d6c1265edf34.slice/crio-0ede86c860ac980d49efbb5f04d472fabe03c4653074a1a827ff49d2034894a1 WatchSource:0}: Error finding container 0ede86c860ac980d49efbb5f04d472fabe03c4653074a1a827ff49d2034894a1: Status 404 returned error can't find the container with id 0ede86c860ac980d49efbb5f04d472fabe03c4653074a1a827ff49d2034894a1
Feb 20 11:50:16.593666 master-0 kubenswrapper[7756]: I0220 11:50:16.589027 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7df828-4166-4e0e-bb1e-042b3d14a6b6" path="/var/lib/kubelet/pods/0e7df828-4166-4e0e-bb1e-042b3d14a6b6/volumes"
Feb 20 11:50:16.593666 master-0 kubenswrapper[7756]: I0220 11:50:16.589588 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f598b79b-809a-4b22-91f0-5227017f6bcb" path="/var/lib/kubelet/pods/f598b79b-809a-4b22-91f0-5227017f6bcb/volumes"
Feb 20 11:50:16.614567 master-0 kubenswrapper[7756]: I0220 11:50:16.602567 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" event={"ID":"21e8e44b-b883-4afb-af90-d6c1265edf34","Type":"ContainerStarted","Data":"0ede86c860ac980d49efbb5f04d472fabe03c4653074a1a827ff49d2034894a1"}
Feb 20 11:50:16.614567 master-0 kubenswrapper[7756]: I0220 11:50:16.607821 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" event={"ID":"98226a59-5234-48f3-a9cd-21de305810dc","Type":"ContainerStarted","Data":"c5857fd0f578f323286023fc24db8dcdefabd0753d52c557c0cb0421ff06a92f"}
Feb 20 11:50:16.614567 master-0 kubenswrapper[7756]: I0220 11:50:16.607878 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" event={"ID":"98226a59-5234-48f3-a9cd-21de305810dc","Type":"ContainerStarted","Data":"e668bf18622f735aba88fe56630f792fd4bf653bbe4e51d87240b3f22f8d64bd"}
Feb 20 11:50:16.614567 master-0 kubenswrapper[7756]: I0220 11:50:16.608050 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:16.616362 master-0 kubenswrapper[7756]: I0220 11:50:16.616214 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:50:16.627631 master-0 kubenswrapper[7756]: I0220 11:50:16.627555 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=4.627519711 podStartE2EDuration="4.627519711s" podCreationTimestamp="2026-02-20 11:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:16.626698057 +0000 UTC m=+62.368946075" watchObservedRunningTime="2026-02-20 11:50:16.627519711 +0000 UTC m=+62.369767719"
Feb 20 11:50:16.648058 master-0 kubenswrapper[7756]: I0220 11:50:16.648004 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"]
Feb 20 11:50:16.648814 master-0 kubenswrapper[7756]: I0220 11:50:16.648797 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 11:50:16.651387 master-0 kubenswrapper[7756]: I0220 11:50:16.651344 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-nd6lj"
Feb 20 11:50:16.651387 master-0 kubenswrapper[7756]: I0220 11:50:16.651379 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 20 11:50:16.651482 master-0 kubenswrapper[7756]: I0220 11:50:16.651410 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 11:50:16.651511 master-0 kubenswrapper[7756]: I0220 11:50:16.651490 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 20 11:50:16.651618 master-0 kubenswrapper[7756]: I0220 11:50:16.651596 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 11:50:16.652540 master-0 kubenswrapper[7756]: I0220 11:50:16.652505 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 20 11:50:16.692107 master-0 kubenswrapper[7756]: I0220 11:50:16.689058 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" podStartSLOduration=3.689029419 podStartE2EDuration="3.689029419s" podCreationTimestamp="2026-02-20 11:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:50:16.650985162 +0000 UTC m=+62.393233190" watchObservedRunningTime="2026-02-20 11:50:16.689029419 +0000 UTC m=+62.431277417"
Feb 20 11:50:16.692107 master-0 kubenswrapper[7756]: I0220 11:50:16.690593 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"]
Feb 20 11:50:16.708245 master-0 kubenswrapper[7756]: I0220 11:50:16.706405 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 11:50:16.708245 master-0 kubenswrapper[7756]: I0220 11:50:16.706457 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7k2n\" (UniqueName: \"kubernetes.io/projected/c29fd426-7c89-434e-8332-1ca31075d4bf-kube-api-access-z7k2n\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 11:50:16.708245 master-0 kubenswrapper[7756]: I0220 11:50:16.706492 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:16.708245 master-0 kubenswrapper[7756]: I0220 11:50:16.706580 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:16.807906 master-0 kubenswrapper[7756]: I0220 11:50:16.807498 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7k2n\" (UniqueName: \"kubernetes.io/projected/c29fd426-7c89-434e-8332-1ca31075d4bf-kube-api-access-z7k2n\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:16.807906 master-0 kubenswrapper[7756]: I0220 11:50:16.807556 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:16.807906 master-0 kubenswrapper[7756]: I0220 11:50:16.807757 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: 
\"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:16.807906 master-0 kubenswrapper[7756]: I0220 11:50:16.807881 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:16.808710 master-0 kubenswrapper[7756]: I0220 11:50:16.808690 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:16.810078 master-0 kubenswrapper[7756]: I0220 11:50:16.810041 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:16.811452 master-0 kubenswrapper[7756]: I0220 11:50:16.811418 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:16.826996 master-0 kubenswrapper[7756]: I0220 11:50:16.826310 7756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-z7k2n\" (UniqueName: \"kubernetes.io/projected/c29fd426-7c89-434e-8332-1ca31075d4bf-kube-api-access-z7k2n\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:17.007411 master-0 kubenswrapper[7756]: I0220 11:50:17.006949 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:50:17.924897 master-0 kubenswrapper[7756]: I0220 11:50:17.924835 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-cll9p"] Feb 20 11:50:17.925900 master-0 kubenswrapper[7756]: I0220 11:50:17.925879 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:17.929690 master-0 kubenswrapper[7756]: I0220 11:50:17.928190 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-l5hc4" Feb 20 11:50:17.929690 master-0 kubenswrapper[7756]: I0220 11:50:17.928810 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 20 11:50:17.929690 master-0 kubenswrapper[7756]: I0220 11:50:17.929096 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 20 11:50:17.929690 master-0 kubenswrapper[7756]: I0220 11:50:17.929290 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 20 11:50:17.929690 master-0 kubenswrapper[7756]: I0220 11:50:17.929490 7756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 20 11:50:17.930307 master-0 kubenswrapper[7756]: I0220 11:50:17.930152 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 20 11:50:18.031614 master-0 kubenswrapper[7756]: I0220 11:50:18.031568 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bbfa556b-3986-44b5-bf47-be113d732ad8-machine-approver-tls\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.031614 master-0 kubenswrapper[7756]: I0220 11:50:18.031621 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-config\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.031876 master-0 kubenswrapper[7756]: I0220 11:50:18.031651 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfwx\" (UniqueName: \"kubernetes.io/projected/bbfa556b-3986-44b5-bf47-be113d732ad8-kube-api-access-gzfwx\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.031876 master-0 kubenswrapper[7756]: I0220 11:50:18.031684 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-auth-proxy-config\") pod \"machine-approver-798b897698-cll9p\" (UID: 
\"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.133626 master-0 kubenswrapper[7756]: I0220 11:50:18.132802 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bbfa556b-3986-44b5-bf47-be113d732ad8-machine-approver-tls\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.133626 master-0 kubenswrapper[7756]: I0220 11:50:18.132871 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-config\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.133626 master-0 kubenswrapper[7756]: I0220 11:50:18.132920 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzfwx\" (UniqueName: \"kubernetes.io/projected/bbfa556b-3986-44b5-bf47-be113d732ad8-kube-api-access-gzfwx\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.133626 master-0 kubenswrapper[7756]: I0220 11:50:18.132973 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-auth-proxy-config\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.134121 master-0 kubenswrapper[7756]: I0220 11:50:18.133812 7756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-auth-proxy-config\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.135896 master-0 kubenswrapper[7756]: I0220 11:50:18.135843 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-config\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.147404 master-0 kubenswrapper[7756]: I0220 11:50:18.147362 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bbfa556b-3986-44b5-bf47-be113d732ad8-machine-approver-tls\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.151961 master-0 kubenswrapper[7756]: I0220 11:50:18.151922 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzfwx\" (UniqueName: \"kubernetes.io/projected/bbfa556b-3986-44b5-bf47-be113d732ad8-kube-api-access-gzfwx\") pod \"machine-approver-798b897698-cll9p\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.220657 master-0 kubenswrapper[7756]: I0220 11:50:18.220322 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 20 11:50:18.225542 master-0 kubenswrapper[7756]: I0220 11:50:18.220928 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 11:50:18.225542 master-0 kubenswrapper[7756]: I0220 11:50:18.223622 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-h4rwl" Feb 20 11:50:18.232600 master-0 kubenswrapper[7756]: I0220 11:50:18.229950 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 20 11:50:18.262288 master-0 kubenswrapper[7756]: I0220 11:50:18.262245 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" Feb 20 11:50:18.363627 master-0 kubenswrapper[7756]: I0220 11:50:18.363563 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb93420d-7c5a-4492-bd16-0104104406b4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 11:50:18.363627 master-0 kubenswrapper[7756]: I0220 11:50:18.363614 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-var-lock\") pod \"installer-2-master-0\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 11:50:18.363959 master-0 kubenswrapper[7756]: I0220 11:50:18.363909 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 
11:50:18.464996 master-0 kubenswrapper[7756]: I0220 11:50:18.464923 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 11:50:18.464996 master-0 kubenswrapper[7756]: I0220 11:50:18.464979 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb93420d-7c5a-4492-bd16-0104104406b4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 11:50:18.465250 master-0 kubenswrapper[7756]: I0220 11:50:18.465073 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 11:50:18.465250 master-0 kubenswrapper[7756]: I0220 11:50:18.465135 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-var-lock\") pod \"installer-2-master-0\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 11:50:18.465250 master-0 kubenswrapper[7756]: I0220 11:50:18.465223 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-var-lock\") pod \"installer-2-master-0\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 
11:50:18.480200 master-0 kubenswrapper[7756]: I0220 11:50:18.480117 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb93420d-7c5a-4492-bd16-0104104406b4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 11:50:18.583550 master-0 kubenswrapper[7756]: I0220 11:50:18.577585 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 11:50:20.783637 master-0 kubenswrapper[7756]: I0220 11:50:20.782321 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q"] Feb 20 11:50:20.784290 master-0 kubenswrapper[7756]: I0220 11:50:20.783948 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:20.788196 master-0 kubenswrapper[7756]: I0220 11:50:20.788145 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 20 11:50:20.788480 master-0 kubenswrapper[7756]: I0220 11:50:20.788459 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 20 11:50:20.788761 master-0 kubenswrapper[7756]: I0220 11:50:20.788747 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9z85g" Feb 20 11:50:20.789094 master-0 kubenswrapper[7756]: I0220 11:50:20.789073 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 20 11:50:20.793855 master-0 kubenswrapper[7756]: I0220 11:50:20.793651 7756 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 20 11:50:20.815313 master-0 kubenswrapper[7756]: I0220 11:50:20.809133 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q"] Feb 20 11:50:20.901231 master-0 kubenswrapper[7756]: I0220 11:50:20.901170 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef18ace4-7316-4600-9be9-2adc792705e9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:20.901419 master-0 kubenswrapper[7756]: I0220 11:50:20.901278 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef18ace4-7316-4600-9be9-2adc792705e9-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:20.901419 master-0 kubenswrapper[7756]: I0220 11:50:20.901351 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7cs\" (UniqueName: \"kubernetes.io/projected/ef18ace4-7316-4600-9be9-2adc792705e9-kube-api-access-kn7cs\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:21.002237 master-0 kubenswrapper[7756]: I0220 11:50:21.002190 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn7cs\" 
(UniqueName: \"kubernetes.io/projected/ef18ace4-7316-4600-9be9-2adc792705e9-kube-api-access-kn7cs\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:21.002237 master-0 kubenswrapper[7756]: I0220 11:50:21.002253 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef18ace4-7316-4600-9be9-2adc792705e9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:21.002573 master-0 kubenswrapper[7756]: I0220 11:50:21.002315 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef18ace4-7316-4600-9be9-2adc792705e9-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:21.004309 master-0 kubenswrapper[7756]: I0220 11:50:21.003830 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef18ace4-7316-4600-9be9-2adc792705e9-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:21.007968 master-0 kubenswrapper[7756]: I0220 11:50:21.007943 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef18ace4-7316-4600-9be9-2adc792705e9-cloud-credential-operator-serving-cert\") pod 
\"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:21.030791 master-0 kubenswrapper[7756]: I0220 11:50:21.030750 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn7cs\" (UniqueName: \"kubernetes.io/projected/ef18ace4-7316-4600-9be9-2adc792705e9-kube-api-access-kn7cs\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:21.121087 master-0 kubenswrapper[7756]: I0220 11:50:21.120957 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 11:50:21.492094 master-0 kubenswrapper[7756]: I0220 11:50:21.491327 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj"] Feb 20 11:50:21.492305 master-0 kubenswrapper[7756]: I0220 11:50:21.492267 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 11:50:21.498723 master-0 kubenswrapper[7756]: I0220 11:50:21.498687 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xsh5v" Feb 20 11:50:21.498723 master-0 kubenswrapper[7756]: I0220 11:50:21.498706 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 20 11:50:21.507050 master-0 kubenswrapper[7756]: I0220 11:50:21.498948 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 20 11:50:21.507050 master-0 kubenswrapper[7756]: I0220 11:50:21.499264 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 20 11:50:21.508963 master-0 kubenswrapper[7756]: I0220 11:50:21.508919 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcjf\" (UniqueName: \"kubernetes.io/projected/5c104245-d078-4856-9a60-207bb6efcfe8-kube-api-access-nlcjf\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 11:50:21.509024 master-0 kubenswrapper[7756]: I0220 11:50:21.509011 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c104245-d078-4856-9a60-207bb6efcfe8-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 11:50:21.511494 master-0 kubenswrapper[7756]: I0220 11:50:21.511383 7756 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj"]
Feb 20 11:50:21.610377 master-0 kubenswrapper[7756]: I0220 11:50:21.610321 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlcjf\" (UniqueName: \"kubernetes.io/projected/5c104245-d078-4856-9a60-207bb6efcfe8-kube-api-access-nlcjf\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj"
Feb 20 11:50:21.610565 master-0 kubenswrapper[7756]: I0220 11:50:21.610454 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c104245-d078-4856-9a60-207bb6efcfe8-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj"
Feb 20 11:50:21.615391 master-0 kubenswrapper[7756]: I0220 11:50:21.615350 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c104245-d078-4856-9a60-207bb6efcfe8-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj"
Feb 20 11:50:21.633168 master-0 kubenswrapper[7756]: I0220 11:50:21.633105 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlcjf\" (UniqueName: \"kubernetes.io/projected/5c104245-d078-4856-9a60-207bb6efcfe8-kube-api-access-nlcjf\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj"
Feb 20 11:50:21.882505 master-0 kubenswrapper[7756]: I0220 11:50:21.882449 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj"
Feb 20 11:50:22.116618 master-0 kubenswrapper[7756]: I0220 11:50:22.116544 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"]
Feb 20 11:50:22.123346 master-0 kubenswrapper[7756]: I0220 11:50:22.123303 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"]
Feb 20 11:50:22.123518 master-0 kubenswrapper[7756]: I0220 11:50:22.123415 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.129197 master-0 kubenswrapper[7756]: I0220 11:50:22.127647 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 20 11:50:22.129197 master-0 kubenswrapper[7756]: I0220 11:50:22.127865 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-jfq59"
Feb 20 11:50:22.129197 master-0 kubenswrapper[7756]: I0220 11:50:22.128128 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 20 11:50:22.129197 master-0 kubenswrapper[7756]: I0220 11:50:22.128193 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 20 11:50:22.129197 master-0 kubenswrapper[7756]: I0220 11:50:22.128404 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 20 11:50:22.219002 master-0 kubenswrapper[7756]: I0220 11:50:22.218934 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.219249 master-0 kubenswrapper[7756]: I0220 11:50:22.219064 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.219249 master-0 kubenswrapper[7756]: I0220 11:50:22.219133 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.219249 master-0 kubenswrapper[7756]: I0220 11:50:22.219168 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.219249 master-0 kubenswrapper[7756]: I0220 11:50:22.219196 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zztmz\" (UniqueName: \"kubernetes.io/projected/bd609bd3-2525-4b88-8f07-94a0418fb582-kube-api-access-zztmz\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.320378 master-0 kubenswrapper[7756]: I0220 11:50:22.320328 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.320627 master-0 kubenswrapper[7756]: I0220 11:50:22.320395 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.320695 master-0 kubenswrapper[7756]: I0220 11:50:22.320610 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.320737 master-0 kubenswrapper[7756]: I0220 11:50:22.320692 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.320771 master-0 kubenswrapper[7756]: I0220 11:50:22.320737 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zztmz\" (UniqueName: \"kubernetes.io/projected/bd609bd3-2525-4b88-8f07-94a0418fb582-kube-api-access-zztmz\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.321253 master-0 kubenswrapper[7756]: I0220 11:50:22.321231 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.321410 master-0 kubenswrapper[7756]: I0220 11:50:22.321369 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.324460 master-0 kubenswrapper[7756]: I0220 11:50:22.324185 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.325696 master-0 kubenswrapper[7756]: I0220 11:50:22.325658 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.338766 master-0 kubenswrapper[7756]: I0220 11:50:22.338696 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zztmz\" (UniqueName: \"kubernetes.io/projected/bd609bd3-2525-4b88-8f07-94a0418fb582-kube-api-access-zztmz\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:22.453931 master-0 kubenswrapper[7756]: I0220 11:50:22.453798 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"
Feb 20 11:50:24.078378 master-0 kubenswrapper[7756]: I0220 11:50:24.078283 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"]
Feb 20 11:50:24.080191 master-0 kubenswrapper[7756]: I0220 11:50:24.080143 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.083458 master-0 kubenswrapper[7756]: I0220 11:50:24.083388 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-mhfhg"
Feb 20 11:50:24.083704 master-0 kubenswrapper[7756]: I0220 11:50:24.083639 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 20 11:50:24.084013 master-0 kubenswrapper[7756]: I0220 11:50:24.083947 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 20 11:50:24.145863 master-0 kubenswrapper[7756]: I0220 11:50:24.145811 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ab951b1-6898-4357-b813-16365f3f89d5-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.146116 master-0 kubenswrapper[7756]: I0220 11:50:24.146083 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ab951b1-6898-4357-b813-16365f3f89d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.146453 master-0 kubenswrapper[7756]: I0220 11:50:24.146422 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdzzt\" (UniqueName: \"kubernetes.io/projected/8ab951b1-6898-4357-b813-16365f3f89d5-kube-api-access-xdzzt\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.247687 master-0 kubenswrapper[7756]: I0220 11:50:24.247584 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ab951b1-6898-4357-b813-16365f3f89d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.248025 master-0 kubenswrapper[7756]: I0220 11:50:24.247783 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdzzt\" (UniqueName: \"kubernetes.io/projected/8ab951b1-6898-4357-b813-16365f3f89d5-kube-api-access-xdzzt\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.248025 master-0 kubenswrapper[7756]: I0220 11:50:24.247852 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ab951b1-6898-4357-b813-16365f3f89d5-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.248860 master-0 kubenswrapper[7756]: I0220 11:50:24.248799 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ab951b1-6898-4357-b813-16365f3f89d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.253446 master-0 kubenswrapper[7756]: I0220 11:50:24.253388 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ab951b1-6898-4357-b813-16365f3f89d5-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.564187 master-0 kubenswrapper[7756]: I0220 11:50:24.564140 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"]
Feb 20 11:50:24.588231 master-0 kubenswrapper[7756]: I0220 11:50:24.588197 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdzzt\" (UniqueName: \"kubernetes.io/projected/8ab951b1-6898-4357-b813-16365f3f89d5-kube-api-access-xdzzt\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.679433 master-0 kubenswrapper[7756]: I0220 11:50:24.673175 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-hsjr7"]
Feb 20 11:50:24.679433 master-0 kubenswrapper[7756]: I0220 11:50:24.678031 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.691546 master-0 kubenswrapper[7756]: I0220 11:50:24.683979 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-8ksk5"
Feb 20 11:50:24.691546 master-0 kubenswrapper[7756]: I0220 11:50:24.684942 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 20 11:50:24.691546 master-0 kubenswrapper[7756]: I0220 11:50:24.685066 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 20 11:50:24.691546 master-0 kubenswrapper[7756]: I0220 11:50:24.685177 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 20 11:50:24.691546 master-0 kubenswrapper[7756]: I0220 11:50:24.685273 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 20 11:50:24.703552 master-0 kubenswrapper[7756]: I0220 11:50:24.695945 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 20 11:50:24.714543 master-0 kubenswrapper[7756]: I0220 11:50:24.709666 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-hsjr7"]
Feb 20 11:50:24.718473 master-0 kubenswrapper[7756]: I0220 11:50:24.717879 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:50:24.719861 master-0 kubenswrapper[7756]: I0220 11:50:24.718890 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"]
Feb 20 11:50:24.719861 master-0 kubenswrapper[7756]: I0220 11:50:24.719537 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 11:50:24.729171 master-0 kubenswrapper[7756]: I0220 11:50:24.726219 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-rjk9v"
Feb 20 11:50:24.729171 master-0 kubenswrapper[7756]: I0220 11:50:24.726488 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 20 11:50:24.748609 master-0 kubenswrapper[7756]: I0220 11:50:24.747918 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"]
Feb 20 11:50:24.871657 master-0 kubenswrapper[7756]: I0220 11:50:24.868016 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdbadd9-eeaa-46ef-936e-5db8d395c118-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 11:50:24.871657 master-0 kubenswrapper[7756]: I0220 11:50:24.868078 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.871657 master-0 kubenswrapper[7756]: I0220 11:50:24.868111 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78bqv\" (UniqueName: \"kubernetes.io/projected/daf25ef5-8247-4dbb-bdc1-55104b1015b7-kube-api-access-78bqv\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.871657 master-0 kubenswrapper[7756]: I0220 11:50:24.868139 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf25ef5-8247-4dbb-bdc1-55104b1015b7-serving-cert\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.871657 master-0 kubenswrapper[7756]: I0220 11:50:24.868174 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-service-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.871657 master-0 kubenswrapper[7756]: I0220 11:50:24.868204 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/daf25ef5-8247-4dbb-bdc1-55104b1015b7-snapshots\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.871657 master-0 kubenswrapper[7756]: I0220 11:50:24.868328 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttmwx\" (UniqueName: \"kubernetes.io/projected/bbdbadd9-eeaa-46ef-936e-5db8d395c118-kube-api-access-ttmwx\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 11:50:24.969549 master-0 kubenswrapper[7756]: I0220 11:50:24.969224 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmwx\" (UniqueName: \"kubernetes.io/projected/bbdbadd9-eeaa-46ef-936e-5db8d395c118-kube-api-access-ttmwx\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 11:50:24.969549 master-0 kubenswrapper[7756]: I0220 11:50:24.969300 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdbadd9-eeaa-46ef-936e-5db8d395c118-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 11:50:24.969549 master-0 kubenswrapper[7756]: I0220 11:50:24.969331 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.969549 master-0 kubenswrapper[7756]: I0220 11:50:24.969353 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78bqv\" (UniqueName: \"kubernetes.io/projected/daf25ef5-8247-4dbb-bdc1-55104b1015b7-kube-api-access-78bqv\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.969549 master-0 kubenswrapper[7756]: I0220 11:50:24.969370 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf25ef5-8247-4dbb-bdc1-55104b1015b7-serving-cert\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.969549 master-0 kubenswrapper[7756]: I0220 11:50:24.969395 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-service-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.969549 master-0 kubenswrapper[7756]: I0220 11:50:24.969451 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/daf25ef5-8247-4dbb-bdc1-55104b1015b7-snapshots\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.976638 master-0 kubenswrapper[7756]: I0220 11:50:24.976594 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.993694 master-0 kubenswrapper[7756]: I0220 11:50:24.987339 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-service-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.993694 master-0 kubenswrapper[7756]: I0220 11:50:24.990455 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdbadd9-eeaa-46ef-936e-5db8d395c118-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 11:50:24.993694 master-0 kubenswrapper[7756]: I0220 11:50:24.991470 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/daf25ef5-8247-4dbb-bdc1-55104b1015b7-snapshots\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:24.994588 master-0 kubenswrapper[7756]: I0220 11:50:24.994551 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf25ef5-8247-4dbb-bdc1-55104b1015b7-serving-cert\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:25.047059 master-0 kubenswrapper[7756]: I0220 11:50:25.047014 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmwx\" (UniqueName: \"kubernetes.io/projected/bbdbadd9-eeaa-46ef-936e-5db8d395c118-kube-api-access-ttmwx\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 11:50:25.053322 master-0 kubenswrapper[7756]: I0220 11:50:25.053284 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78bqv\" (UniqueName: \"kubernetes.io/projected/daf25ef5-8247-4dbb-bdc1-55104b1015b7-kube-api-access-78bqv\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:25.073392 master-0 kubenswrapper[7756]: I0220 11:50:25.072826 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 11:50:25.089598 master-0 kubenswrapper[7756]: I0220 11:50:25.082338 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"]
Feb 20 11:50:25.091824 master-0 kubenswrapper[7756]: I0220 11:50:25.091801 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.097543 master-0 kubenswrapper[7756]: I0220 11:50:25.093391 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 20 11:50:25.097543 master-0 kubenswrapper[7756]: I0220 11:50:25.093608 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 20 11:50:25.097543 master-0 kubenswrapper[7756]: I0220 11:50:25.094887 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 11:50:25.097543 master-0 kubenswrapper[7756]: I0220 11:50:25.095235 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 20 11:50:25.097543 master-0 kubenswrapper[7756]: I0220 11:50:25.095423 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 11:50:25.097543 master-0 kubenswrapper[7756]: I0220 11:50:25.095540 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-g5hcq"
Feb 20 11:50:25.273613 master-0 kubenswrapper[7756]: I0220 11:50:25.273550 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.273824 master-0 kubenswrapper[7756]: I0220 11:50:25.273634 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.273866 master-0 kubenswrapper[7756]: I0220 11:50:25.273808 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.273968 master-0 kubenswrapper[7756]: I0220 11:50:25.273946 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rngh2\" (UniqueName: \"kubernetes.io/projected/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-kube-api-access-rngh2\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.274009 master-0 kubenswrapper[7756]: I0220 11:50:25.273997 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.290237 master-0 kubenswrapper[7756]: I0220 11:50:25.290192 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"]
Feb 20 11:50:25.291460 master-0 kubenswrapper[7756]: I0220 11:50:25.291437 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 11:50:25.296243 master-0 kubenswrapper[7756]: I0220 11:50:25.293918 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-r89nt"
Feb 20 11:50:25.296243 master-0 kubenswrapper[7756]: I0220 11:50:25.294128 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 20 11:50:25.315309 master-0 kubenswrapper[7756]: I0220 11:50:25.315266 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"]
Feb 20 11:50:25.345915 master-0 kubenswrapper[7756]: I0220 11:50:25.345860 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:50:25.375793 master-0 kubenswrapper[7756]: I0220 11:50:25.375738 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-apiservice-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 11:50:25.375793 master-0 kubenswrapper[7756]: I0220 11:50:25.375793 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.376026 master-0 kubenswrapper[7756]: I0220 11:50:25.375832 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.376026 master-0 kubenswrapper[7756]: I0220 11:50:25.375853 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-webhook-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 11:50:25.376026 master-0 kubenswrapper[7756]: I0220 11:50:25.375871 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf682\" (UniqueName: \"kubernetes.io/projected/ae1fd116-6f63-4344-b7af-278665649e5a-kube-api-access-wf682\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 11:50:25.376026 master-0 kubenswrapper[7756]: I0220 11:50:25.375907 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rngh2\" (UniqueName: \"kubernetes.io/projected/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-kube-api-access-rngh2\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.376026 master-0 kubenswrapper[7756]: I0220 11:50:25.375929 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.376026 master-0 kubenswrapper[7756]: I0220 11:50:25.375963 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ae1fd116-6f63-4344-b7af-278665649e5a-tmpfs\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 11:50:25.376026 master-0 kubenswrapper[7756]: I0220 11:50:25.375984 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.376219 master-0 kubenswrapper[7756]: I0220 11:50:25.376088 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.377601 master-0 kubenswrapper[7756]: I0220 11:50:25.377108 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.377601 master-0 kubenswrapper[7756]: I0220 11:50:25.377376 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.382252 master-0 kubenswrapper[7756]: I0220 11:50:25.382217 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.397229 master-0 kubenswrapper[7756]: I0220 11:50:25.397198 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rngh2\" (UniqueName: \"kubernetes.io/projected/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-kube-api-access-rngh2\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:50:25.412123 master-0 kubenswrapper[7756]: I0220 11:50:25.412076 7756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" Feb 20 11:50:25.476969 master-0 kubenswrapper[7756]: I0220 11:50:25.476917 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ae1fd116-6f63-4344-b7af-278665649e5a-tmpfs\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:50:25.477148 master-0 kubenswrapper[7756]: I0220 11:50:25.476983 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-apiservice-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:50:25.477148 master-0 kubenswrapper[7756]: I0220 11:50:25.477014 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-webhook-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:50:25.477271 master-0 kubenswrapper[7756]: I0220 11:50:25.477235 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf682\" (UniqueName: \"kubernetes.io/projected/ae1fd116-6f63-4344-b7af-278665649e5a-kube-api-access-wf682\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:50:25.477604 master-0 kubenswrapper[7756]: I0220 11:50:25.477577 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ae1fd116-6f63-4344-b7af-278665649e5a-tmpfs\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:50:25.480493 master-0 kubenswrapper[7756]: I0220 11:50:25.480458 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-apiservice-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:50:25.480714 master-0 kubenswrapper[7756]: I0220 11:50:25.480671 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-webhook-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:50:25.499480 master-0 kubenswrapper[7756]: I0220 11:50:25.499158 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf682\" (UniqueName: \"kubernetes.io/projected/ae1fd116-6f63-4344-b7af-278665649e5a-kube-api-access-wf682\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:50:25.610010 master-0 kubenswrapper[7756]: I0220 11:50:25.609863 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:50:25.690733 master-0 kubenswrapper[7756]: I0220 11:50:25.690682 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"] Feb 20 11:50:25.691497 master-0 kubenswrapper[7756]: I0220 11:50:25.691475 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.693898 master-0 kubenswrapper[7756]: I0220 11:50:25.693850 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 20 11:50:25.694327 master-0 kubenswrapper[7756]: I0220 11:50:25.694302 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 20 11:50:25.694520 master-0 kubenswrapper[7756]: I0220 11:50:25.694495 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 20 11:50:25.694914 master-0 kubenswrapper[7756]: I0220 11:50:25.694900 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-st2x9" Feb 20 11:50:25.711908 master-0 kubenswrapper[7756]: I0220 11:50:25.711863 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"] Feb 20 11:50:25.781859 master-0 kubenswrapper[7756]: I0220 11:50:25.781619 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/62fc400b-b3dd-4134-bd27-69dd8369153a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.781859 master-0 
kubenswrapper[7756]: I0220 11:50:25.781769 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-images\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.781859 master-0 kubenswrapper[7756]: I0220 11:50:25.781815 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-config\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.781859 master-0 kubenswrapper[7756]: I0220 11:50:25.781898 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbsxw\" (UniqueName: \"kubernetes.io/projected/62fc400b-b3dd-4134-bd27-69dd8369153a-kube-api-access-zbsxw\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.883625 master-0 kubenswrapper[7756]: I0220 11:50:25.883273 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-images\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.883625 master-0 kubenswrapper[7756]: I0220 11:50:25.883478 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-config\") pod 
\"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.884183 master-0 kubenswrapper[7756]: I0220 11:50:25.884124 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbsxw\" (UniqueName: \"kubernetes.io/projected/62fc400b-b3dd-4134-bd27-69dd8369153a-kube-api-access-zbsxw\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.884321 master-0 kubenswrapper[7756]: I0220 11:50:25.884293 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/62fc400b-b3dd-4134-bd27-69dd8369153a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.884374 master-0 kubenswrapper[7756]: I0220 11:50:25.884357 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-config\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.884553 master-0 kubenswrapper[7756]: I0220 11:50:25.884510 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-images\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.887821 master-0 kubenswrapper[7756]: I0220 11:50:25.887788 7756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/62fc400b-b3dd-4134-bd27-69dd8369153a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:25.903276 master-0 kubenswrapper[7756]: I0220 11:50:25.903230 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbsxw\" (UniqueName: \"kubernetes.io/projected/62fc400b-b3dd-4134-bd27-69dd8369153a-kube-api-access-zbsxw\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:26.026143 master-0 kubenswrapper[7756]: I0220 11:50:26.026057 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:50:26.855620 master-0 kubenswrapper[7756]: I0220 11:50:26.855208 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mpwks"] Feb 20 11:50:26.857210 master-0 kubenswrapper[7756]: I0220 11:50:26.857188 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:26.861774 master-0 kubenswrapper[7756]: I0220 11:50:26.861236 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-g5hlk" Feb 20 11:50:26.862218 master-0 kubenswrapper[7756]: I0220 11:50:26.862162 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 20 11:50:27.020507 master-0 kubenswrapper[7756]: I0220 11:50:27.020410 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkn7h\" (UniqueName: \"kubernetes.io/projected/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-kube-api-access-qkn7h\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.021043 master-0 kubenswrapper[7756]: I0220 11:50:27.021001 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-rootfs\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.021094 master-0 kubenswrapper[7756]: I0220 11:50:27.021061 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-proxy-tls\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.021094 master-0 kubenswrapper[7756]: I0220 11:50:27.021081 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-mcd-auth-proxy-config\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.122259 master-0 kubenswrapper[7756]: I0220 11:50:27.122039 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkn7h\" (UniqueName: \"kubernetes.io/projected/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-kube-api-access-qkn7h\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.122259 master-0 kubenswrapper[7756]: I0220 11:50:27.122241 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-rootfs\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.122492 master-0 kubenswrapper[7756]: I0220 11:50:27.122270 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-proxy-tls\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.122492 master-0 kubenswrapper[7756]: I0220 11:50:27.122290 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-mcd-auth-proxy-config\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.123921 
master-0 kubenswrapper[7756]: I0220 11:50:27.122682 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-rootfs\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.123921 master-0 kubenswrapper[7756]: I0220 11:50:27.123557 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-mcd-auth-proxy-config\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.134422 master-0 kubenswrapper[7756]: I0220 11:50:27.126974 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-proxy-tls\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.138423 master-0 kubenswrapper[7756]: I0220 11:50:27.138381 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkn7h\" (UniqueName: \"kubernetes.io/projected/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-kube-api-access-qkn7h\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:27.195215 master-0 kubenswrapper[7756]: I0220 11:50:27.195152 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:50:32.820458 master-0 kubenswrapper[7756]: I0220 11:50:32.820390 7756 generic.go:334] "Generic (PLEG): container finished" podID="01e90033-9ddf-41b4-ab61-e89add6c2fde" containerID="731cb148dbfdffc2b55c2372adae7ffe3b1128ca5f50a9d64465c2aba12d6905" exitCode=0 Feb 20 11:50:32.820458 master-0 kubenswrapper[7756]: I0220 11:50:32.820461 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" event={"ID":"01e90033-9ddf-41b4-ab61-e89add6c2fde","Type":"ContainerDied","Data":"731cb148dbfdffc2b55c2372adae7ffe3b1128ca5f50a9d64465c2aba12d6905"} Feb 20 11:50:32.821804 master-0 kubenswrapper[7756]: I0220 11:50:32.821147 7756 scope.go:117] "RemoveContainer" containerID="731cb148dbfdffc2b55c2372adae7ffe3b1128ca5f50a9d64465c2aba12d6905" Feb 20 11:50:33.829614 master-0 kubenswrapper[7756]: I0220 11:50:33.829559 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_a827d746-cfd3-48a2-a20b-2ff1526986b9/installer/0.log" Feb 20 11:50:33.830322 master-0 kubenswrapper[7756]: I0220 11:50:33.829626 7756 generic.go:334] "Generic (PLEG): container finished" podID="a827d746-cfd3-48a2-a20b-2ff1526986b9" containerID="87542918fa08c7c4d02b25510f491f9813c8b4b90b5f23c58f4f083551680cc2" exitCode=1 Feb 20 11:50:33.830322 master-0 kubenswrapper[7756]: I0220 11:50:33.829669 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"a827d746-cfd3-48a2-a20b-2ff1526986b9","Type":"ContainerDied","Data":"87542918fa08c7c4d02b25510f491f9813c8b4b90b5f23c58f4f083551680cc2"} Feb 20 11:50:34.837066 master-0 kubenswrapper[7756]: I0220 11:50:34.836900 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_52c9c322-a0d1-4d27-b3bc-aaa8bd25beec/installer/0.log" 
Feb 20 11:50:34.837066 master-0 kubenswrapper[7756]: I0220 11:50:34.836990 7756 generic.go:334] "Generic (PLEG): container finished" podID="52c9c322-a0d1-4d27-b3bc-aaa8bd25beec" containerID="ab80be68bf4e51ef54cc0eec1c7816960fc41469504dbffd4d942cebf0931414" exitCode=1 Feb 20 11:50:34.837066 master-0 kubenswrapper[7756]: I0220 11:50:34.837051 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec","Type":"ContainerDied","Data":"ab80be68bf4e51ef54cc0eec1c7816960fc41469504dbffd4d942cebf0931414"} Feb 20 11:50:35.926595 master-0 kubenswrapper[7756]: I0220 11:50:35.923725 7756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 20 11:50:35.926595 master-0 kubenswrapper[7756]: I0220 11:50:35.924079 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" containerID="cri-o://f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a" gracePeriod=30 Feb 20 11:50:35.926595 master-0 kubenswrapper[7756]: I0220 11:50:35.924278 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" containerID="cri-o://96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3" gracePeriod=30 Feb 20 11:50:35.939359 master-0 kubenswrapper[7756]: I0220 11:50:35.939246 7756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 20 11:50:35.943649 master-0 kubenswrapper[7756]: E0220 11:50:35.939637 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 20 11:50:35.943649 master-0 kubenswrapper[7756]: I0220 11:50:35.939662 7756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 20 11:50:35.943649 master-0 kubenswrapper[7756]: E0220 11:50:35.939684 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 20 11:50:35.943649 master-0 kubenswrapper[7756]: I0220 11:50:35.939697 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 20 11:50:35.943649 master-0 kubenswrapper[7756]: I0220 11:50:35.939950 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 20 11:50:35.943649 master-0 kubenswrapper[7756]: I0220 11:50:35.939981 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 20 11:50:35.951496 master-0 kubenswrapper[7756]: I0220 11:50:35.951440 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 11:50:36.064029 master-0 kubenswrapper[7756]: I0220 11:50:36.063925 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 11:50:36.064029 master-0 kubenswrapper[7756]: I0220 11:50:36.064018 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 11:50:36.064332 master-0 kubenswrapper[7756]: I0220 11:50:36.064061 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 11:50:36.064332 master-0 kubenswrapper[7756]: I0220 11:50:36.064104 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 11:50:36.064332 master-0 kubenswrapper[7756]: I0220 11:50:36.064238 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 11:50:36.064332 master-0 kubenswrapper[7756]: I0220 11:50:36.064304 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 11:50:36.165752 master-0 kubenswrapper[7756]: I0220 11:50:36.165597 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 11:50:36.165752 master-0 kubenswrapper[7756]: I0220 11:50:36.165727 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: 
\"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166119 master-0 kubenswrapper[7756]: I0220 11:50:36.165766 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166119 master-0 kubenswrapper[7756]: I0220 11:50:36.165878 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166119 master-0 kubenswrapper[7756]: I0220 11:50:36.165926 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166119 master-0 kubenswrapper[7756]: I0220 11:50:36.165936 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166119 master-0 kubenswrapper[7756]: I0220 11:50:36.165991 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166119 master-0 kubenswrapper[7756]: I0220 11:50:36.166006 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166119 master-0 kubenswrapper[7756]: I0220 11:50:36.166047 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166119 master-0 kubenswrapper[7756]: I0220 11:50:36.166085 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166119 master-0 kubenswrapper[7756]: I0220 11:50:36.166052 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.166478 master-0 kubenswrapper[7756]: I0220 11:50:36.166130 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:50:36.849720 master-0 kubenswrapper[7756]: I0220 11:50:36.849651 7756 generic.go:334] "Generic (PLEG): container finished" podID="5710eb66-9717-4beb-a8b2-19f6886376b3" containerID="b016752d8ba5cbc29441e53dbfb424ff953b01aa96097dd394c1910c4e093b09" exitCode=0
Feb 20 11:50:36.849720 master-0 kubenswrapper[7756]: I0220 11:50:36.849692 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"5710eb66-9717-4beb-a8b2-19f6886376b3","Type":"ContainerDied","Data":"b016752d8ba5cbc29441e53dbfb424ff953b01aa96097dd394c1910c4e093b09"}
Feb 20 11:50:37.606469 master-0 kubenswrapper[7756]: I0220 11:50:37.606409 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 11:50:39.421613 master-0 kubenswrapper[7756]: W0220 11:50:39.421489 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbfa556b_3986_44b5_bf47_be113d732ad8.slice/crio-11c133e52238e057b93eaa645207b925b45693009945d2a2b0773bd924046bd2 WatchSource:0}: Error finding container 11c133e52238e057b93eaa645207b925b45693009945d2a2b0773bd924046bd2: Status 404 returned error can't find the container with id 11c133e52238e057b93eaa645207b925b45693009945d2a2b0773bd924046bd2
Feb 20 11:50:39.519972 master-0 kubenswrapper[7756]: I0220 11:50:39.519913 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:39.528430 master-0 kubenswrapper[7756]: I0220 11:50:39.528371 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_52c9c322-a0d1-4d27-b3bc-aaa8bd25beec/installer/0.log"
Feb 20 11:50:39.528519 master-0 kubenswrapper[7756]: I0220 11:50:39.528482 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 20 11:50:39.538130 master-0 kubenswrapper[7756]: I0220 11:50:39.538063 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_a827d746-cfd3-48a2-a20b-2ff1526986b9/installer/0.log"
Feb 20 11:50:39.538228 master-0 kubenswrapper[7756]: I0220 11:50:39.538210 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 11:50:39.632827 master-0 kubenswrapper[7756]: I0220 11:50:39.632565 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-var-lock\") pod \"5710eb66-9717-4beb-a8b2-19f6886376b3\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") "
Feb 20 11:50:39.632827 master-0 kubenswrapper[7756]: I0220 11:50:39.632795 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-kubelet-dir\") pod \"5710eb66-9717-4beb-a8b2-19f6886376b3\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") "
Feb 20 11:50:39.633398 master-0 kubenswrapper[7756]: I0220 11:50:39.632776 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-var-lock" (OuterVolumeSpecName: "var-lock") pod "5710eb66-9717-4beb-a8b2-19f6886376b3" (UID: "5710eb66-9717-4beb-a8b2-19f6886376b3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:50:39.633398 master-0 kubenswrapper[7756]: I0220 11:50:39.632885 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5710eb66-9717-4beb-a8b2-19f6886376b3" (UID: "5710eb66-9717-4beb-a8b2-19f6886376b3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:50:39.633398 master-0 kubenswrapper[7756]: I0220 11:50:39.632871 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5710eb66-9717-4beb-a8b2-19f6886376b3-kube-api-access\") pod \"5710eb66-9717-4beb-a8b2-19f6886376b3\" (UID: \"5710eb66-9717-4beb-a8b2-19f6886376b3\") "
Feb 20 11:50:39.633398 master-0 kubenswrapper[7756]: I0220 11:50:39.633046 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kubelet-dir\") pod \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") "
Feb 20 11:50:39.633398 master-0 kubenswrapper[7756]: I0220 11:50:39.633297 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kube-api-access\") pod \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") "
Feb 20 11:50:39.633398 master-0 kubenswrapper[7756]: I0220 11:50:39.633310 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "52c9c322-a0d1-4d27-b3bc-aaa8bd25beec" (UID: "52c9c322-a0d1-4d27-b3bc-aaa8bd25beec"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:50:39.633398 master-0 kubenswrapper[7756]: I0220 11:50:39.633409 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-var-lock\") pod \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\" (UID: \"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec\") "
Feb 20 11:50:39.634085 master-0 kubenswrapper[7756]: I0220 11:50:39.633479 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-var-lock" (OuterVolumeSpecName: "var-lock") pod "52c9c322-a0d1-4d27-b3bc-aaa8bd25beec" (UID: "52c9c322-a0d1-4d27-b3bc-aaa8bd25beec"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:50:39.634085 master-0 kubenswrapper[7756]: I0220 11:50:39.633938 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:39.634085 master-0 kubenswrapper[7756]: I0220 11:50:39.633982 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:39.634085 master-0 kubenswrapper[7756]: I0220 11:50:39.634047 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5710eb66-9717-4beb-a8b2-19f6886376b3-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:39.634085 master-0 kubenswrapper[7756]: I0220 11:50:39.634096 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:39.639268 master-0 kubenswrapper[7756]: I0220 11:50:39.639193 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "52c9c322-a0d1-4d27-b3bc-aaa8bd25beec" (UID: "52c9c322-a0d1-4d27-b3bc-aaa8bd25beec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:50:39.640796 master-0 kubenswrapper[7756]: I0220 11:50:39.640729 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5710eb66-9717-4beb-a8b2-19f6886376b3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5710eb66-9717-4beb-a8b2-19f6886376b3" (UID: "5710eb66-9717-4beb-a8b2-19f6886376b3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:50:39.735306 master-0 kubenswrapper[7756]: I0220 11:50:39.735234 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-var-lock\") pod \"a827d746-cfd3-48a2-a20b-2ff1526986b9\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") "
Feb 20 11:50:39.736289 master-0 kubenswrapper[7756]: I0220 11:50:39.735412 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "a827d746-cfd3-48a2-a20b-2ff1526986b9" (UID: "a827d746-cfd3-48a2-a20b-2ff1526986b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:50:39.736289 master-0 kubenswrapper[7756]: I0220 11:50:39.735460 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-kubelet-dir\") pod \"a827d746-cfd3-48a2-a20b-2ff1526986b9\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") "
Feb 20 11:50:39.736289 master-0 kubenswrapper[7756]: I0220 11:50:39.735660 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a827d746-cfd3-48a2-a20b-2ff1526986b9-kube-api-access\") pod \"a827d746-cfd3-48a2-a20b-2ff1526986b9\" (UID: \"a827d746-cfd3-48a2-a20b-2ff1526986b9\") "
Feb 20 11:50:39.736289 master-0 kubenswrapper[7756]: I0220 11:50:39.735645 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a827d746-cfd3-48a2-a20b-2ff1526986b9" (UID: "a827d746-cfd3-48a2-a20b-2ff1526986b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:50:39.736289 master-0 kubenswrapper[7756]: I0220 11:50:39.736173 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5710eb66-9717-4beb-a8b2-19f6886376b3-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:39.736289 master-0 kubenswrapper[7756]: I0220 11:50:39.736197 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:39.736289 master-0 kubenswrapper[7756]: I0220 11:50:39.736220 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:39.736289 master-0 kubenswrapper[7756]: I0220 11:50:39.736239 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a827d746-cfd3-48a2-a20b-2ff1526986b9-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:39.740714 master-0 kubenswrapper[7756]: I0220 11:50:39.740643 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a827d746-cfd3-48a2-a20b-2ff1526986b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a827d746-cfd3-48a2-a20b-2ff1526986b9" (UID: "a827d746-cfd3-48a2-a20b-2ff1526986b9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:50:39.838566 master-0 kubenswrapper[7756]: I0220 11:50:39.838104 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a827d746-cfd3-48a2-a20b-2ff1526986b9-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:39.894512 master-0 kubenswrapper[7756]: I0220 11:50:39.894370 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_a827d746-cfd3-48a2-a20b-2ff1526986b9/installer/0.log"
Feb 20 11:50:39.894729 master-0 kubenswrapper[7756]: I0220 11:50:39.894585 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 11:50:39.894729 master-0 kubenswrapper[7756]: I0220 11:50:39.894649 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"a827d746-cfd3-48a2-a20b-2ff1526986b9","Type":"ContainerDied","Data":"c266b103482362f3c418b4517deddb3769575b5e6f6333189c11e3e4fa22e93f"}
Feb 20 11:50:39.894866 master-0 kubenswrapper[7756]: I0220 11:50:39.894751 7756 scope.go:117] "RemoveContainer" containerID="87542918fa08c7c4d02b25510f491f9813c8b4b90b5f23c58f4f083551680cc2"
Feb 20 11:50:39.897335 master-0 kubenswrapper[7756]: I0220 11:50:39.897180 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_52c9c322-a0d1-4d27-b3bc-aaa8bd25beec/installer/0.log"
Feb 20 11:50:39.897409 master-0 kubenswrapper[7756]: I0220 11:50:39.897324 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"52c9c322-a0d1-4d27-b3bc-aaa8bd25beec","Type":"ContainerDied","Data":"ef5687cfa7e6e042604067e80f4e15ff2c4827e0ae1ef872b3cf5d173fd3b030"}
Feb 20 11:50:39.897506 master-0 kubenswrapper[7756]: I0220 11:50:39.897422 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 20 11:50:39.900342 master-0 kubenswrapper[7756]: I0220 11:50:39.900277 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"5710eb66-9717-4beb-a8b2-19f6886376b3","Type":"ContainerDied","Data":"97dbf6403141d9540379400f393a21ef236f6a9b6384164aeddd18c354d998df"}
Feb 20 11:50:39.900435 master-0 kubenswrapper[7756]: I0220 11:50:39.900345 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97dbf6403141d9540379400f393a21ef236f6a9b6384164aeddd18c354d998df"
Feb 20 11:50:39.900435 master-0 kubenswrapper[7756]: I0220 11:50:39.900309 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 20 11:50:39.903730 master-0 kubenswrapper[7756]: I0220 11:50:39.903673 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" event={"ID":"01e90033-9ddf-41b4-ab61-e89add6c2fde","Type":"ContainerStarted","Data":"5602fcf86766ef7d0d60953da5d2c52d3e2681c284b76402a701dd6648958446"}
Feb 20 11:50:39.905831 master-0 kubenswrapper[7756]: I0220 11:50:39.905761 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" event={"ID":"bbfa556b-3986-44b5-bf47-be113d732ad8","Type":"ContainerStarted","Data":"11c133e52238e057b93eaa645207b925b45693009945d2a2b0773bd924046bd2"}
Feb 20 11:50:40.159761 master-0 kubenswrapper[7756]: W0220 11:50:40.159700 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37cb3bb1_f5ba_4b7b_9af9_55bf61906a51.slice/crio-e6e96cc43446a3135181efd422d5641ca4e6cd2f71bf5238bf91b4954d41a24a WatchSource:0}: Error finding container e6e96cc43446a3135181efd422d5641ca4e6cd2f71bf5238bf91b4954d41a24a: Status 404 returned error can't find the container with id e6e96cc43446a3135181efd422d5641ca4e6cd2f71bf5238bf91b4954d41a24a
Feb 20 11:50:40.234840 master-0 kubenswrapper[7756]: I0220 11:50:40.234070 7756 scope.go:117] "RemoveContainer" containerID="ab80be68bf4e51ef54cc0eec1c7816960fc41469504dbffd4d942cebf0931414"
Feb 20 11:50:40.926702 master-0 kubenswrapper[7756]: I0220 11:50:40.918660 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" event={"ID":"bbfa556b-3986-44b5-bf47-be113d732ad8","Type":"ContainerStarted","Data":"9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea"}
Feb 20 11:50:40.926702 master-0 kubenswrapper[7756]: I0220 11:50:40.922433 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q287t" event={"ID":"11aaad8c-2f25-460f-b4af-f27d8bc682a0","Type":"ContainerStarted","Data":"063c2153ddfff922b46919bbdf5dbe745ed9d91ad8a4df1a43233846341ae431"}
Feb 20 11:50:40.926702 master-0 kubenswrapper[7756]: I0220 11:50:40.924893 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d8l9" event={"ID":"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd","Type":"ContainerStarted","Data":"410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4"}
Feb 20 11:50:40.926702 master-0 kubenswrapper[7756]: I0220 11:50:40.925007 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4d8l9" podUID="c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" containerName="extract-content" containerID="cri-o://410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4" gracePeriod=2
Feb 20 11:50:40.934922 master-0 kubenswrapper[7756]: I0220 11:50:40.934871 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" event={"ID":"21e8e44b-b883-4afb-af90-d6c1265edf34","Type":"ContainerStarted","Data":"35ae4f95ab0c966594fd2d547d61e743ca73d94994a40e72e5d8f5673d88afb4"}
Feb 20 11:50:40.947546 master-0 kubenswrapper[7756]: I0220 11:50:40.941028 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" event={"ID":"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51","Type":"ContainerStarted","Data":"cd16bd752b73b8b49c9f915a16effe79766d7670ad8d9f340d00a15fdc577892"}
Feb 20 11:50:40.947546 master-0 kubenswrapper[7756]: I0220 11:50:40.941116 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" event={"ID":"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51","Type":"ContainerStarted","Data":"e6e96cc43446a3135181efd422d5641ca4e6cd2f71bf5238bf91b4954d41a24a"}
Feb 20 11:50:40.947546 master-0 kubenswrapper[7756]: I0220 11:50:40.943255 7756 generic.go:334] "Generic (PLEG): container finished" podID="339f8487-0d2b-4f4f-9872-c629e7f3e2e1" containerID="2780fb76dab8cb4ff20cae7af7ff9e5c1ede2b033bf8f471a79ed57430cd17b2" exitCode=0
Feb 20 11:50:40.947546 master-0 kubenswrapper[7756]: I0220 11:50:40.943299 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf5rz" event={"ID":"339f8487-0d2b-4f4f-9872-c629e7f3e2e1","Type":"ContainerDied","Data":"2780fb76dab8cb4ff20cae7af7ff9e5c1ede2b033bf8f471a79ed57430cd17b2"}
Feb 20 11:50:40.947546 master-0 kubenswrapper[7756]: I0220 11:50:40.946983 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps68j" event={"ID":"50084c46-32ff-4e8a-b35e-8e7b1943cc04","Type":"ContainerStarted","Data":"468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5"}
Feb 20 11:50:40.947546 master-0 kubenswrapper[7756]: I0220 11:50:40.947263 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ps68j" podUID="50084c46-32ff-4e8a-b35e-8e7b1943cc04" containerName="extract-content" containerID="cri-o://468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5" gracePeriod=2
Feb 20 11:50:40.950562 master-0 kubenswrapper[7756]: I0220 11:50:40.949636 7756 generic.go:334] "Generic (PLEG): container finished" podID="3733ccb5-2cea-4151-a2a7-d9c089a34cbc" containerID="ac2a6c5224eacb16ddebb88b66cf9a14c11e9518cb03b6b4a4a9bf567c4e7641" exitCode=0
Feb 20 11:50:40.950562 master-0 kubenswrapper[7756]: I0220 11:50:40.949684 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkdbv" event={"ID":"3733ccb5-2cea-4151-a2a7-d9c089a34cbc","Type":"ContainerDied","Data":"ac2a6c5224eacb16ddebb88b66cf9a14c11e9518cb03b6b4a4a9bf567c4e7641"}
Feb 20 11:50:40.959826 master-0 kubenswrapper[7756]: I0220 11:50:40.954960 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76v4z" event={"ID":"19cf75ed-6a4e-444d-8975-fa6ecba79f13","Type":"ContainerStarted","Data":"8d7f06e36f50ed12da8519e84b3ce5adfb4b80cd958663f0c41472dd9f14ecbe"}
Feb 20 11:50:40.959826 master-0 kubenswrapper[7756]: I0220 11:50:40.956018 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" event={"ID":"5e270e14-6e48-4fd0-bbd6-73e401a88e1d","Type":"ContainerStarted","Data":"33702da26d6f4dc7caebef6e36ac571d9d9f35d8ceadf09833d286f6ffd2ab74"}
Feb 20 11:50:40.962599 master-0 kubenswrapper[7756]: I0220 11:50:40.960129 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kn5q" event={"ID":"34382460-b2d7-4154-87ba-c0347a4c0f1b","Type":"ContainerStarted","Data":"2584c39a031705eeedc4fb35b529a7825665a56d2f5188033d504f6edec7e39f"}
Feb 20 11:50:41.537136 master-0 kubenswrapper[7756]: I0220 11:50:41.531603 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkdbv"
Feb 20 11:50:41.561979 master-0 kubenswrapper[7756]: I0220 11:50:41.561934 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mf5rz"
Feb 20 11:50:41.565065 master-0 kubenswrapper[7756]: I0220 11:50:41.564981 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4d8l9_c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd/extract-content/0.log"
Feb 20 11:50:41.566597 master-0 kubenswrapper[7756]: I0220 11:50:41.566502 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4d8l9"
Feb 20 11:50:41.568973 master-0 kubenswrapper[7756]: I0220 11:50:41.568894 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ps68j_50084c46-32ff-4e8a-b35e-8e7b1943cc04/extract-content/0.log"
Feb 20 11:50:41.569589 master-0 kubenswrapper[7756]: I0220 11:50:41.569547 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps68j"
Feb 20 11:50:41.662411 master-0 kubenswrapper[7756]: I0220 11:50:41.662327 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-catalog-content\") pod \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") "
Feb 20 11:50:41.662518 master-0 kubenswrapper[7756]: I0220 11:50:41.662430 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-catalog-content\") pod \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") "
Feb 20 11:50:41.662518 master-0 kubenswrapper[7756]: I0220 11:50:41.662492 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-utilities\") pod \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") "
Feb 20 11:50:41.662675 master-0 kubenswrapper[7756]: I0220 11:50:41.662622 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z22pz\" (UniqueName: \"kubernetes.io/projected/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-kube-api-access-z22pz\") pod \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") "
Feb 20 11:50:41.662783 master-0 kubenswrapper[7756]: I0220 11:50:41.662747 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-catalog-content\") pod \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") "
Feb 20 11:50:41.662826 master-0 kubenswrapper[7756]: I0220 11:50:41.662806 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-utilities\") pod \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") "
Feb 20 11:50:41.662873 master-0 kubenswrapper[7756]: I0220 11:50:41.662845 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-utilities\") pod \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") "
Feb 20 11:50:41.662931 master-0 kubenswrapper[7756]: I0220 11:50:41.662906 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx9z6\" (UniqueName: \"kubernetes.io/projected/50084c46-32ff-4e8a-b35e-8e7b1943cc04-kube-api-access-hx9z6\") pod \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\" (UID: \"50084c46-32ff-4e8a-b35e-8e7b1943cc04\") "
Feb 20 11:50:41.663020 master-0 kubenswrapper[7756]: I0220 11:50:41.662985 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-utilities\") pod \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\" (UID: \"339f8487-0d2b-4f4f-9872-c629e7f3e2e1\") "
Feb 20 11:50:41.663057 master-0 kubenswrapper[7756]: I0220 11:50:41.663031 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgw4x\" (UniqueName: \"kubernetes.io/projected/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-kube-api-access-fgw4x\") pod \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\" (UID: \"3733ccb5-2cea-4151-a2a7-d9c089a34cbc\") "
Feb 20 11:50:41.663127 master-0 kubenswrapper[7756]: I0220 11:50:41.663101 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-catalog-content\") pod \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") "
Feb 20 11:50:41.663193 master-0 kubenswrapper[7756]: I0220 11:50:41.663171 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr6kw\" (UniqueName: \"kubernetes.io/projected/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-kube-api-access-dr6kw\") pod \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\" (UID: \"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd\") "
Feb 20 11:50:41.663801 master-0 kubenswrapper[7756]: I0220 11:50:41.663742 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-utilities" (OuterVolumeSpecName: "utilities") pod "3733ccb5-2cea-4151-a2a7-d9c089a34cbc" (UID: "3733ccb5-2cea-4151-a2a7-d9c089a34cbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 11:50:41.663901 master-0 kubenswrapper[7756]: I0220 11:50:41.663842 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-utilities" (OuterVolumeSpecName: "utilities") pod "50084c46-32ff-4e8a-b35e-8e7b1943cc04" (UID: "50084c46-32ff-4e8a-b35e-8e7b1943cc04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 11:50:41.665146 master-0 kubenswrapper[7756]: I0220 11:50:41.664196 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-utilities" (OuterVolumeSpecName: "utilities") pod "c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" (UID: "c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 11:50:41.665146 master-0 kubenswrapper[7756]: I0220 11:50:41.664789 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-utilities" (OuterVolumeSpecName: "utilities") pod "339f8487-0d2b-4f4f-9872-c629e7f3e2e1" (UID: "339f8487-0d2b-4f4f-9872-c629e7f3e2e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 11:50:41.668185 master-0 kubenswrapper[7756]: I0220 11:50:41.667899 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-kube-api-access-z22pz" (OuterVolumeSpecName: "kube-api-access-z22pz") pod "339f8487-0d2b-4f4f-9872-c629e7f3e2e1" (UID: "339f8487-0d2b-4f4f-9872-c629e7f3e2e1"). InnerVolumeSpecName "kube-api-access-z22pz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:50:41.669473 master-0 kubenswrapper[7756]: I0220 11:50:41.669115 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50084c46-32ff-4e8a-b35e-8e7b1943cc04-kube-api-access-hx9z6" (OuterVolumeSpecName: "kube-api-access-hx9z6") pod "50084c46-32ff-4e8a-b35e-8e7b1943cc04" (UID: "50084c46-32ff-4e8a-b35e-8e7b1943cc04"). InnerVolumeSpecName "kube-api-access-hx9z6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:50:41.670358 master-0 kubenswrapper[7756]: I0220 11:50:41.669848 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-kube-api-access-dr6kw" (OuterVolumeSpecName: "kube-api-access-dr6kw") pod "c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" (UID: "c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd"). InnerVolumeSpecName "kube-api-access-dr6kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:50:41.671237 master-0 kubenswrapper[7756]: I0220 11:50:41.671190 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-kube-api-access-fgw4x" (OuterVolumeSpecName: "kube-api-access-fgw4x") pod "3733ccb5-2cea-4151-a2a7-d9c089a34cbc" (UID: "3733ccb5-2cea-4151-a2a7-d9c089a34cbc"). InnerVolumeSpecName "kube-api-access-fgw4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:50:41.672795 master-0 kubenswrapper[7756]: I0220 11:50:41.672767 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" (UID: "c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 11:50:41.681455 master-0 kubenswrapper[7756]: I0220 11:50:41.681393 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50084c46-32ff-4e8a-b35e-8e7b1943cc04" (UID: "50084c46-32ff-4e8a-b35e-8e7b1943cc04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 11:50:41.738934 master-0 kubenswrapper[7756]: I0220 11:50:41.738811 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3733ccb5-2cea-4151-a2a7-d9c089a34cbc" (UID: "3733ccb5-2cea-4151-a2a7-d9c089a34cbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 11:50:41.766383 master-0 kubenswrapper[7756]: I0220 11:50:41.766281 7756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.766383 master-0 kubenswrapper[7756]: I0220 11:50:41.766345 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr6kw\" (UniqueName: \"kubernetes.io/projected/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-kube-api-access-dr6kw\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.766383 master-0 kubenswrapper[7756]: I0220 11:50:41.766359 7756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.766383 master-0 kubenswrapper[7756]: I0220 11:50:41.766370 7756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.766383 master-0 kubenswrapper[7756]: I0220 11:50:41.766383 7756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-utilities\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.766383 master-0 kubenswrapper[7756]: I0220 11:50:41.766393 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z22pz\" (UniqueName: \"kubernetes.io/projected/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-kube-api-access-z22pz\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.766383 master-0 kubenswrapper[7756]: I0220 11:50:41.766404 7756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50084c46-32ff-4e8a-b35e-8e7b1943cc04-utilities\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.767005 master-0 kubenswrapper[7756]: I0220 11:50:41.766416 7756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd-utilities\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.767005 master-0 kubenswrapper[7756]: I0220 11:50:41.766428 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx9z6\" (UniqueName: \"kubernetes.io/projected/50084c46-32ff-4e8a-b35e-8e7b1943cc04-kube-api-access-hx9z6\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.767005 master-0 kubenswrapper[7756]: I0220 11:50:41.766439 7756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-utilities\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.767005 master-0 kubenswrapper[7756]: I0220 11:50:41.766449 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgw4x\" (UniqueName: \"kubernetes.io/projected/3733ccb5-2cea-4151-a2a7-d9c089a34cbc-kube-api-access-fgw4x\") on node \"master-0\" DevicePath \"\""
Feb 20 11:50:41.781939 master-0 kubenswrapper[7756]: I0220 11:50:41.781874 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "339f8487-0d2b-4f4f-9872-c629e7f3e2e1" (UID: "339f8487-0d2b-4f4f-9872-c629e7f3e2e1"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 11:50:41.868695 master-0 kubenswrapper[7756]: I0220 11:50:41.868597 7756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339f8487-0d2b-4f4f-9872-c629e7f3e2e1-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 20 11:50:41.975783 master-0 kubenswrapper[7756]: I0220 11:50:41.975611 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ps68j_50084c46-32ff-4e8a-b35e-8e7b1943cc04/extract-content/0.log" Feb 20 11:50:41.976570 master-0 kubenswrapper[7756]: I0220 11:50:41.976471 7756 generic.go:334] "Generic (PLEG): container finished" podID="50084c46-32ff-4e8a-b35e-8e7b1943cc04" containerID="468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5" exitCode=2 Feb 20 11:50:41.976669 master-0 kubenswrapper[7756]: I0220 11:50:41.976591 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps68j" Feb 20 11:50:41.976748 master-0 kubenswrapper[7756]: I0220 11:50:41.976573 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps68j" event={"ID":"50084c46-32ff-4e8a-b35e-8e7b1943cc04","Type":"ContainerDied","Data":"468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5"} Feb 20 11:50:41.976824 master-0 kubenswrapper[7756]: I0220 11:50:41.976779 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps68j" event={"ID":"50084c46-32ff-4e8a-b35e-8e7b1943cc04","Type":"ContainerDied","Data":"177018fd0692881df9a10c94ae902b21c26f441b2c1f7434ed6731b2dc1c1347"} Feb 20 11:50:41.976894 master-0 kubenswrapper[7756]: I0220 11:50:41.976832 7756 scope.go:117] "RemoveContainer" containerID="468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5" Feb 20 11:50:41.978941 master-0 kubenswrapper[7756]: I0220 11:50:41.978884 7756 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4d8l9_c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd/extract-content/0.log" Feb 20 11:50:41.979827 master-0 kubenswrapper[7756]: I0220 11:50:41.979756 7756 generic.go:334] "Generic (PLEG): container finished" podID="c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" containerID="410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4" exitCode=2 Feb 20 11:50:41.979930 master-0 kubenswrapper[7756]: I0220 11:50:41.979848 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4d8l9" Feb 20 11:50:41.979930 master-0 kubenswrapper[7756]: I0220 11:50:41.979856 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d8l9" event={"ID":"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd","Type":"ContainerDied","Data":"410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4"} Feb 20 11:50:41.979930 master-0 kubenswrapper[7756]: I0220 11:50:41.979886 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4d8l9" event={"ID":"c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd","Type":"ContainerDied","Data":"bed75b7b50642092b0adab8613c8dafc1d6047b83c4e361f7ca29d93aa1af83c"} Feb 20 11:50:41.982426 master-0 kubenswrapper[7756]: I0220 11:50:41.982348 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" event={"ID":"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51","Type":"ContainerStarted","Data":"23d1e6382c662d0cbd8252d3455965d29b6e655f2fec17296792305e3b76cc10"} Feb 20 11:50:41.986124 master-0 kubenswrapper[7756]: I0220 11:50:41.986003 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkdbv" Feb 20 11:50:41.986124 master-0 kubenswrapper[7756]: I0220 11:50:41.986000 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkdbv" event={"ID":"3733ccb5-2cea-4151-a2a7-d9c089a34cbc","Type":"ContainerDied","Data":"271cc74d3ac13c4272233cef14b9564ed745bec604aeb22abbc28cf2cb340d3b"} Feb 20 11:50:41.988573 master-0 kubenswrapper[7756]: I0220 11:50:41.988407 7756 generic.go:334] "Generic (PLEG): container finished" podID="8df029f2-d0ec-4543-9371-7694b1e85a06" containerID="98f8ace42aab6b9228e43ce90bfe3ea401b8bf607616e8f25c18422bae53c536" exitCode=0 Feb 20 11:50:41.988573 master-0 kubenswrapper[7756]: I0220 11:50:41.988489 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89t2q" event={"ID":"8df029f2-d0ec-4543-9371-7694b1e85a06","Type":"ContainerDied","Data":"98f8ace42aab6b9228e43ce90bfe3ea401b8bf607616e8f25c18422bae53c536"} Feb 20 11:50:41.991675 master-0 kubenswrapper[7756]: I0220 11:50:41.991610 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mf5rz" Feb 20 11:50:41.991675 master-0 kubenswrapper[7756]: I0220 11:50:41.991598 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf5rz" event={"ID":"339f8487-0d2b-4f4f-9872-c629e7f3e2e1","Type":"ContainerDied","Data":"acab5b54f256c278bb063800e469a049966e6a788255cb6555d858b4efd4df61"} Feb 20 11:50:41.995080 master-0 kubenswrapper[7756]: I0220 11:50:41.994963 7756 generic.go:334] "Generic (PLEG): container finished" podID="11aaad8c-2f25-460f-b4af-f27d8bc682a0" containerID="063c2153ddfff922b46919bbdf5dbe745ed9d91ad8a4df1a43233846341ae431" exitCode=0 Feb 20 11:50:41.995080 master-0 kubenswrapper[7756]: I0220 11:50:41.995025 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q287t" event={"ID":"11aaad8c-2f25-460f-b4af-f27d8bc682a0","Type":"ContainerDied","Data":"063c2153ddfff922b46919bbdf5dbe745ed9d91ad8a4df1a43233846341ae431"} Feb 20 11:50:42.000089 master-0 kubenswrapper[7756]: I0220 11:50:41.998015 7756 scope.go:117] "RemoveContainer" containerID="d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2" Feb 20 11:50:42.000089 master-0 kubenswrapper[7756]: I0220 11:50:41.999322 7756 generic.go:334] "Generic (PLEG): container finished" podID="19cf75ed-6a4e-444d-8975-fa6ecba79f13" containerID="8d7f06e36f50ed12da8519e84b3ce5adfb4b80cd958663f0c41472dd9f14ecbe" exitCode=0 Feb 20 11:50:42.000089 master-0 kubenswrapper[7756]: I0220 11:50:41.999430 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76v4z" event={"ID":"19cf75ed-6a4e-444d-8975-fa6ecba79f13","Type":"ContainerDied","Data":"8d7f06e36f50ed12da8519e84b3ce5adfb4b80cd958663f0c41472dd9f14ecbe"} Feb 20 11:50:42.004109 master-0 kubenswrapper[7756]: I0220 11:50:42.003989 7756 generic.go:334] "Generic (PLEG): container finished" podID="34382460-b2d7-4154-87ba-c0347a4c0f1b" 
containerID="2584c39a031705eeedc4fb35b529a7825665a56d2f5188033d504f6edec7e39f" exitCode=0 Feb 20 11:50:42.004225 master-0 kubenswrapper[7756]: I0220 11:50:42.004133 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kn5q" event={"ID":"34382460-b2d7-4154-87ba-c0347a4c0f1b","Type":"ContainerDied","Data":"2584c39a031705eeedc4fb35b529a7825665a56d2f5188033d504f6edec7e39f"} Feb 20 11:50:42.391307 master-0 kubenswrapper[7756]: I0220 11:50:42.391282 7756 scope.go:117] "RemoveContainer" containerID="468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5" Feb 20 11:50:42.392017 master-0 kubenswrapper[7756]: E0220 11:50:42.391941 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5\": container with ID starting with 468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5 not found: ID does not exist" containerID="468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5" Feb 20 11:50:42.392123 master-0 kubenswrapper[7756]: I0220 11:50:42.392025 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5"} err="failed to get container status \"468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5\": rpc error: code = NotFound desc = could not find container \"468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5\": container with ID starting with 468e279fb52c85842f933592a3b63e523be408db6840f8f285b2d5cee20979c5 not found: ID does not exist" Feb 20 11:50:42.392123 master-0 kubenswrapper[7756]: I0220 11:50:42.392075 7756 scope.go:117] "RemoveContainer" containerID="d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2" Feb 20 11:50:42.392655 master-0 kubenswrapper[7756]: E0220 11:50:42.392617 7756 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2\": container with ID starting with d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2 not found: ID does not exist" containerID="d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2" Feb 20 11:50:42.392759 master-0 kubenswrapper[7756]: I0220 11:50:42.392655 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2"} err="failed to get container status \"d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2\": rpc error: code = NotFound desc = could not find container \"d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2\": container with ID starting with d3234baa1ee8eded2cc9fe34f7722a82dcfa31c0fd310f1ff2830fbcd0a573d2 not found: ID does not exist" Feb 20 11:50:42.392759 master-0 kubenswrapper[7756]: I0220 11:50:42.392681 7756 scope.go:117] "RemoveContainer" containerID="410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4" Feb 20 11:50:42.446924 master-0 kubenswrapper[7756]: I0220 11:50:42.446876 7756 scope.go:117] "RemoveContainer" containerID="3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2" Feb 20 11:50:42.496381 master-0 kubenswrapper[7756]: I0220 11:50:42.496337 7756 scope.go:117] "RemoveContainer" containerID="410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4" Feb 20 11:50:42.497097 master-0 kubenswrapper[7756]: E0220 11:50:42.496990 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4\": container with ID starting with 410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4 not found: ID does not exist" 
containerID="410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4" Feb 20 11:50:42.497097 master-0 kubenswrapper[7756]: I0220 11:50:42.497048 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4"} err="failed to get container status \"410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4\": rpc error: code = NotFound desc = could not find container \"410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4\": container with ID starting with 410b92b4dd25a08c9b5a88bbedeb856333ddffc0db63cdb372b8beaa95d6dfd4 not found: ID does not exist" Feb 20 11:50:42.497097 master-0 kubenswrapper[7756]: I0220 11:50:42.497086 7756 scope.go:117] "RemoveContainer" containerID="3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2" Feb 20 11:50:42.497699 master-0 kubenswrapper[7756]: E0220 11:50:42.497630 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2\": container with ID starting with 3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2 not found: ID does not exist" containerID="3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2" Feb 20 11:50:42.497759 master-0 kubenswrapper[7756]: I0220 11:50:42.497712 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2"} err="failed to get container status \"3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2\": rpc error: code = NotFound desc = could not find container \"3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2\": container with ID starting with 3d30310d5d64e87cb7f50a1908bf38d936e48855ada0342580eebece80d1e3b2 not found: ID does not exist" Feb 20 11:50:42.497811 master-0 
kubenswrapper[7756]: I0220 11:50:42.497764 7756 scope.go:117] "RemoveContainer" containerID="ac2a6c5224eacb16ddebb88b66cf9a14c11e9518cb03b6b4a4a9bf567c4e7641" Feb 20 11:50:42.525784 master-0 kubenswrapper[7756]: I0220 11:50:42.525737 7756 scope.go:117] "RemoveContainer" containerID="be445a861a8c4eb01f24b747bb03f34ee72cd75d7bc09a2585ad126da8f250fe" Feb 20 11:50:42.550841 master-0 kubenswrapper[7756]: I0220 11:50:42.550788 7756 scope.go:117] "RemoveContainer" containerID="2780fb76dab8cb4ff20cae7af7ff9e5c1ede2b033bf8f471a79ed57430cd17b2" Feb 20 11:50:43.015763 master-0 kubenswrapper[7756]: I0220 11:50:43.015630 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" event={"ID":"bbfa556b-3986-44b5-bf47-be113d732ad8","Type":"ContainerStarted","Data":"c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749"} Feb 20 11:50:43.463620 master-0 kubenswrapper[7756]: I0220 11:50:43.463411 7756 scope.go:117] "RemoveContainer" containerID="9ba882d3908425079028ab1281244bc811ead996e74f3222e0a28296913d80f5" Feb 20 11:50:44.029709 master-0 kubenswrapper[7756]: I0220 11:50:44.029590 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76v4z" event={"ID":"19cf75ed-6a4e-444d-8975-fa6ecba79f13","Type":"ContainerStarted","Data":"acfe30a428619420a0266d14ed0cafc94b62ee935cd01e2e0118aa7e6f0385ef"} Feb 20 11:50:44.033381 master-0 kubenswrapper[7756]: I0220 11:50:44.033330 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q287t" event={"ID":"11aaad8c-2f25-460f-b4af-f27d8bc682a0","Type":"ContainerStarted","Data":"2ea45f0d09323aa129a5f888d9af00fa2d4cf7a190b285db6e5a14c35f7d4305"} Feb 20 11:50:44.036499 master-0 kubenswrapper[7756]: I0220 11:50:44.035940 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" 
event={"ID":"5e270e14-6e48-4fd0-bbd6-73e401a88e1d","Type":"ContainerStarted","Data":"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"} Feb 20 11:50:44.039160 master-0 kubenswrapper[7756]: I0220 11:50:44.039109 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kn5q" event={"ID":"34382460-b2d7-4154-87ba-c0347a4c0f1b","Type":"ContainerStarted","Data":"5a7637fc67ebaeb468b6119a6dd34b01da746eca4f415f60a3444429e04609f6"} Feb 20 11:50:44.045604 master-0 kubenswrapper[7756]: I0220 11:50:44.045504 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89t2q" event={"ID":"8df029f2-d0ec-4543-9371-7694b1e85a06","Type":"ContainerStarted","Data":"15cbb996fb073219e66e6264a39635d509fc6b0621a7fd9fc75145a72e29d56d"} Feb 20 11:50:45.052246 master-0 kubenswrapper[7756]: I0220 11:50:45.052189 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" event={"ID":"5e270e14-6e48-4fd0-bbd6-73e401a88e1d","Type":"ContainerStarted","Data":"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"} Feb 20 11:50:45.053179 master-0 kubenswrapper[7756]: I0220 11:50:45.053111 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" event={"ID":"5e270e14-6e48-4fd0-bbd6-73e401a88e1d","Type":"ContainerStarted","Data":"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"} Feb 20 11:50:46.307434 master-0 kubenswrapper[7756]: E0220 11:50:46.307333 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:50:46.884381 master-0 kubenswrapper[7756]: I0220 
11:50:46.884229 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:50:46.884381 master-0 kubenswrapper[7756]: I0220 11:50:46.884365 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:50:47.067573 master-0 kubenswrapper[7756]: I0220 11:50:47.067464 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:50:47.067573 master-0 kubenswrapper[7756]: I0220 11:50:47.067556 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:50:47.931260 master-0 kubenswrapper[7756]: I0220 11:50:47.931180 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:47.931260 master-0 kubenswrapper[7756]: I0220 11:50:47.931250 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:48.011797 master-0 
kubenswrapper[7756]: I0220 11:50:48.011663 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:48.138337 master-0 kubenswrapper[7756]: I0220 11:50:48.138261 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-76v4z" Feb 20 11:50:48.987144 master-0 kubenswrapper[7756]: E0220 11:50:48.987062 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 20 11:50:48.988187 master-0 kubenswrapper[7756]: I0220 11:50:48.987979 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 11:50:49.008782 master-0 kubenswrapper[7756]: W0220 11:50:49.008733 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a83278819db2092fa26d8274eb3f00.slice/crio-738550ece6202ca97e4d4da03bc64c3f4988e4f8109aaa56600eb18e38a2f798 WatchSource:0}: Error finding container 738550ece6202ca97e4d4da03bc64c3f4988e4f8109aaa56600eb18e38a2f798: Status 404 returned error can't find the container with id 738550ece6202ca97e4d4da03bc64c3f4988e4f8109aaa56600eb18e38a2f798 Feb 20 11:50:49.079711 master-0 kubenswrapper[7756]: I0220 11:50:49.079639 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"738550ece6202ca97e4d4da03bc64c3f4988e4f8109aaa56600eb18e38a2f798"} Feb 20 11:50:49.082619 master-0 kubenswrapper[7756]: I0220 11:50:49.082574 7756 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="6d3121ed9f14f1a68a11c14e19a8ba5e47d812ae84b3f62cc56772a81aa8f139" exitCode=1 Feb 20 11:50:49.082730 master-0 
kubenswrapper[7756]: I0220 11:50:49.082673 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"6d3121ed9f14f1a68a11c14e19a8ba5e47d812ae84b3f62cc56772a81aa8f139"} Feb 20 11:50:49.082800 master-0 kubenswrapper[7756]: I0220 11:50:49.082758 7756 scope.go:117] "RemoveContainer" containerID="63a4ec3dde8f5a0e5831c20c7c43b03806a786d19e88fcb36793fe30ce83f9e5" Feb 20 11:50:49.083598 master-0 kubenswrapper[7756]: I0220 11:50:49.083517 7756 scope.go:117] "RemoveContainer" containerID="6d3121ed9f14f1a68a11c14e19a8ba5e47d812ae84b3f62cc56772a81aa8f139" Feb 20 11:50:49.247334 master-0 kubenswrapper[7756]: I0220 11:50:49.247245 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:49.247334 master-0 kubenswrapper[7756]: I0220 11:50:49.247336 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:49.307584 master-0 kubenswrapper[7756]: I0220 11:50:49.306880 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:49.885732 master-0 kubenswrapper[7756]: I0220 11:50:49.884758 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:50:49.885732 master-0 kubenswrapper[7756]: I0220 11:50:49.884844 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" 
output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:50:50.066553 master-0 kubenswrapper[7756]: I0220 11:50:50.066481 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:50:50.067745 master-0 kubenswrapper[7756]: I0220 11:50:50.066596 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:50:50.092477 master-0 kubenswrapper[7756]: I0220 11:50:50.091859 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"407a4490b53b516c4eaa24c4972588c07da9b5f9574f9b35da5b44a438b78bcc"} Feb 20 11:50:50.094177 master-0 kubenswrapper[7756]: I0220 11:50:50.094140 7756 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872" exitCode=0 Feb 20 11:50:50.094240 master-0 kubenswrapper[7756]: I0220 11:50:50.094202 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872"} Feb 20 11:50:50.161968 master-0 kubenswrapper[7756]: I0220 11:50:50.161845 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-7kn5q" Feb 20 11:50:50.280373 master-0 kubenswrapper[7756]: I0220 11:50:50.280308 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:50:51.657740 master-0 kubenswrapper[7756]: I0220 11:50:51.657672 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:51.657740 master-0 kubenswrapper[7756]: I0220 11:50:51.657743 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:51.703986 master-0 kubenswrapper[7756]: I0220 11:50:51.703909 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:51.742115 master-0 kubenswrapper[7756]: I0220 11:50:51.742046 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:50:52.050721 master-0 kubenswrapper[7756]: I0220 11:50:52.050620 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:52.050721 master-0 kubenswrapper[7756]: I0220 11:50:52.050687 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:52.114167 master-0 kubenswrapper[7756]: I0220 11:50:52.114095 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:52.169182 master-0 kubenswrapper[7756]: I0220 11:50:52.169114 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 11:50:52.180447 master-0 kubenswrapper[7756]: I0220 11:50:52.180380 7756 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q287t" Feb 20 11:50:52.884490 master-0 kubenswrapper[7756]: I0220 11:50:52.884401 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:50:52.885293 master-0 kubenswrapper[7756]: I0220 11:50:52.884501 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:50:52.885293 master-0 kubenswrapper[7756]: I0220 11:50:52.884616 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:50:52.885677 master-0 kubenswrapper[7756]: I0220 11:50:52.885616 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 20 11:50:52.885799 master-0 kubenswrapper[7756]: I0220 11:50:52.885694 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" containerID="cri-o://65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b" gracePeriod=30 Feb 20 
11:50:53.067102 master-0 kubenswrapper[7756]: I0220 11:50:53.067016 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:50:53.067387 master-0 kubenswrapper[7756]: I0220 11:50:53.067110 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:50:53.067387 master-0 kubenswrapper[7756]: I0220 11:50:53.067237 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:50:54.118121 master-0 kubenswrapper[7756]: I0220 11:50:54.118036 7756 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="6f48bf3168ea3ca5cdb5d4b4fe30f40410c99744121d1afe1db8ccea90206a28" exitCode=1 Feb 20 11:50:54.118121 master-0 kubenswrapper[7756]: I0220 11:50:54.118083 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"6f48bf3168ea3ca5cdb5d4b4fe30f40410c99744121d1afe1db8ccea90206a28"} Feb 20 11:50:54.118942 master-0 kubenswrapper[7756]: I0220 11:50:54.118474 7756 scope.go:117] "RemoveContainer" containerID="6f48bf3168ea3ca5cdb5d4b4fe30f40410c99744121d1afe1db8ccea90206a28" Feb 20 11:50:54.743134 master-0 kubenswrapper[7756]: I0220 11:50:54.742919 7756 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 11:50:55.129802 master-0 kubenswrapper[7756]: I0220 11:50:55.129724 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"842b6aa1bc5c1b0962214c188df26e12d0920e5f8c4d3227e8ab1c9741425d8b"} Feb 20 11:50:55.254295 master-0 kubenswrapper[7756]: I0220 11:50:55.254202 7756 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-vtcnw container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 20 11:50:55.254592 master-0 kubenswrapper[7756]: I0220 11:50:55.254306 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" podUID="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 20 11:50:56.066893 master-0 kubenswrapper[7756]: I0220 11:50:56.066777 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:50:56.066893 master-0 kubenswrapper[7756]: I0220 11:50:56.066868 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" 
podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:50:56.308157 master-0 kubenswrapper[7756]: E0220 11:50:56.308040 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:50:56.663571 master-0 kubenswrapper[7756]: E0220 11:50:56.663131 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:50:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:50:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:50:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:50:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\
\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c40
1bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\
\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c52470
16734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126\\\"],\\\"sizeBytes\\\":438548891},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229\\\"],\\\"sizeBytes\\\":396420881}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 
11:50:59.067768 master-0 kubenswrapper[7756]: I0220 11:50:59.067621 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:50:59.068980 master-0 kubenswrapper[7756]: I0220 11:50:59.067828 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:02.067100 master-0 kubenswrapper[7756]: I0220 11:51:02.067029 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:02.067978 master-0 kubenswrapper[7756]: I0220 11:51:02.067111 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:03.100000 master-0 kubenswrapper[7756]: E0220 11:51:03.099917 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 20 11:51:04.194392 master-0 kubenswrapper[7756]: I0220 11:51:04.194282 7756 generic.go:334] "Generic (PLEG): container 
finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3" exitCode=0 Feb 20 11:51:04.198365 master-0 kubenswrapper[7756]: I0220 11:51:04.198296 7756 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0" exitCode=0 Feb 20 11:51:04.198365 master-0 kubenswrapper[7756]: I0220 11:51:04.198356 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0"} Feb 20 11:51:04.570922 master-0 kubenswrapper[7756]: I0220 11:51:04.570783 7756 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-d69w2 container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Feb 20 11:51:04.571225 master-0 kubenswrapper[7756]: I0220 11:51:04.570973 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" podUID="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Feb 20 11:51:04.742963 master-0 kubenswrapper[7756]: I0220 11:51:04.742802 7756 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:05.067334 master-0 kubenswrapper[7756]: I0220 11:51:05.067242 
7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:05.067645 master-0 kubenswrapper[7756]: I0220 11:51:05.067352 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:05.253942 master-0 kubenswrapper[7756]: I0220 11:51:05.253817 7756 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-vtcnw container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 20 11:51:05.253942 master-0 kubenswrapper[7756]: I0220 11:51:05.253898 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" podUID="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 20 11:51:06.066134 master-0 kubenswrapper[7756]: I0220 11:51:06.066038 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 20 11:51:06.066318 master-0 kubenswrapper[7756]: I0220 11:51:06.066156 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 20 11:51:06.123143 master-0 kubenswrapper[7756]: I0220 11:51:06.123059 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 20 11:51:06.123143 master-0 kubenswrapper[7756]: I0220 11:51:06.123138 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 20 11:51:06.123389 master-0 kubenswrapper[7756]: I0220 11:51:06.123142 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs" (OuterVolumeSpecName: "certs") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:51:06.123389 master-0 kubenswrapper[7756]: I0220 11:51:06.123296 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir" (OuterVolumeSpecName: "data-dir") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:51:06.123572 master-0 kubenswrapper[7756]: I0220 11:51:06.123485 7756 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 11:51:06.123572 master-0 kubenswrapper[7756]: I0220 11:51:06.123509 7756 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 11:51:06.212957 master-0 kubenswrapper[7756]: I0220 11:51:06.212807 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 20 11:51:06.212957 master-0 kubenswrapper[7756]: I0220 11:51:06.212899 7756 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a" exitCode=137 Feb 20 11:51:06.213183 master-0 kubenswrapper[7756]: I0220 11:51:06.212984 7756 scope.go:117] "RemoveContainer" containerID="96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3" Feb 20 11:51:06.213183 master-0 kubenswrapper[7756]: I0220 11:51:06.213005 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 20 11:51:06.236221 master-0 kubenswrapper[7756]: I0220 11:51:06.236129 7756 scope.go:117] "RemoveContainer" containerID="f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a" Feb 20 11:51:06.266401 master-0 kubenswrapper[7756]: I0220 11:51:06.266324 7756 scope.go:117] "RemoveContainer" containerID="96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3" Feb 20 11:51:06.267163 master-0 kubenswrapper[7756]: E0220 11:51:06.267107 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3\": container with ID starting with 96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3 not found: ID does not exist" containerID="96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3" Feb 20 11:51:06.267264 master-0 kubenswrapper[7756]: I0220 11:51:06.267157 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3"} err="failed to get container status \"96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3\": rpc error: code = NotFound desc = could not find container \"96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3\": container with ID starting with 96886d8a032fd5d62adc57b52a624b84e10414b0186d56899d96874f35313ca3 not found: ID does not exist" Feb 20 11:51:06.267264 master-0 kubenswrapper[7756]: I0220 11:51:06.267197 7756 scope.go:117] "RemoveContainer" containerID="f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a" Feb 20 11:51:06.267644 master-0 kubenswrapper[7756]: E0220 11:51:06.267595 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a\": 
container with ID starting with f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a not found: ID does not exist" containerID="f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a" Feb 20 11:51:06.267644 master-0 kubenswrapper[7756]: I0220 11:51:06.267627 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a"} err="failed to get container status \"f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a\": rpc error: code = NotFound desc = could not find container \"f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a\": container with ID starting with f5f43068fbb5a9da164f8ee835b3b81c0e487b16f18a0855b330e8a595241a1a not found: ID does not exist" Feb 20 11:51:06.276783 master-0 kubenswrapper[7756]: E0220 11:51:06.276723 7756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12dab5d350ebc129b0bfa4714d330b15.slice/crio-e0641894015187b3510c96c6c6bd4f01c441dcb52b2dacc02ae9839b7ddf2146\": RecentStats: unable to find data in memory cache]" Feb 20 11:51:06.309315 master-0 kubenswrapper[7756]: E0220 11:51:06.309209 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:06.591423 master-0 kubenswrapper[7756]: I0220 11:51:06.591328 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12dab5d350ebc129b0bfa4714d330b15" path="/var/lib/kubelet/pods/12dab5d350ebc129b0bfa4714d330b15/volumes" Feb 20 11:51:06.592057 master-0 kubenswrapper[7756]: I0220 11:51:06.592018 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" 
Feb 20 11:51:06.664254 master-0 kubenswrapper[7756]: E0220 11:51:06.664180 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:08.067263 master-0 kubenswrapper[7756]: I0220 11:51:08.067174 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:08.068136 master-0 kubenswrapper[7756]: I0220 11:51:08.067260 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:10.261502 master-0 kubenswrapper[7756]: E0220 11:51:10.261311 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895f21d4286ddb5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:50:35.924266421 +0000 UTC m=+81.666514469,LastTimestamp:2026-02-20 11:50:35.924266421 +0000 UTC m=+81.666514469,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:51:11.067582 master-0 kubenswrapper[7756]: I0220 11:51:11.067450 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:11.067929 master-0 kubenswrapper[7756]: I0220 11:51:11.067588 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:11.250617 master-0 kubenswrapper[7756]: I0220 11:51:11.250363 7756 generic.go:334] "Generic (PLEG): container finished" podID="1df81fcc-f967-4874-ad16-1a89f0e7875a" containerID="5461ac8869ede1ae48aaf443305cec8c0cf9a21a54dc206e103440a3f966bcc9" exitCode=0 Feb 20 11:51:11.253563 master-0 kubenswrapper[7756]: I0220 11:51:11.253480 7756 generic.go:334] "Generic (PLEG): container finished" podID="e0b28c90-d5b6-44f3-867c-020ece32ac7d" containerID="73c4ac8066ad3eb7342716309b7b8a802bf833f8fcd163ad12901b630f6305c2" exitCode=0 Feb 20 11:51:14.067362 master-0 kubenswrapper[7756]: I0220 11:51:14.067241 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:14.067362 master-0 kubenswrapper[7756]: I0220 11:51:14.067355 7756 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:14.277597 master-0 kubenswrapper[7756]: I0220 11:51:14.277506 7756 generic.go:334] "Generic (PLEG): container finished" podID="312ca024-c8f0-4994-8f9a-b707607341fe" containerID="9e91bb7cb260950fd5e975354ec43adcbf694e33c154dd1b679deca6be0b9cfb" exitCode=0 Feb 20 11:51:14.742930 master-0 kubenswrapper[7756]: I0220 11:51:14.742836 7756 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:15.254266 master-0 kubenswrapper[7756]: I0220 11:51:15.254148 7756 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-vtcnw container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 20 11:51:15.255061 master-0 kubenswrapper[7756]: I0220 11:51:15.254269 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" podUID="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 20 11:51:16.292866 master-0 kubenswrapper[7756]: I0220 11:51:16.292802 7756 generic.go:334] "Generic (PLEG): container finished" podID="f1388469-5e55-4c1b-97c3-c88777f29ae7" 
containerID="b288109ee32770ae0136eb8073a319dc58d7b8d8a7d067c5f9bf71abd12290e4" exitCode=0 Feb 20 11:51:16.309718 master-0 kubenswrapper[7756]: E0220 11:51:16.309651 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:16.665245 master-0 kubenswrapper[7756]: E0220 11:51:16.664983 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:17.066984 master-0 kubenswrapper[7756]: I0220 11:51:17.066879 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:17.067284 master-0 kubenswrapper[7756]: I0220 11:51:17.066995 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:17.208161 master-0 kubenswrapper[7756]: E0220 11:51:17.208082 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 20 11:51:18.315218 master-0 kubenswrapper[7756]: I0220 11:51:18.315139 7756 generic.go:334] "Generic (PLEG): container finished" 
podID="18a83278819db2092fa26d8274eb3f00" containerID="ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27" exitCode=0 Feb 20 11:51:19.327111 master-0 kubenswrapper[7756]: I0220 11:51:19.326894 7756 generic.go:334] "Generic (PLEG): container finished" podID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerID="65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b" exitCode=0 Feb 20 11:51:21.351194 master-0 kubenswrapper[7756]: I0220 11:51:21.351073 7756 generic.go:334] "Generic (PLEG): container finished" podID="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" containerID="c3fd58850441274093931c36087d9a8518e8af6cd5182fdb00d74233da8d66da" exitCode=0 Feb 20 11:51:21.352633 master-0 kubenswrapper[7756]: I0220 11:51:21.352577 7756 generic.go:334] "Generic (PLEG): container finished" podID="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" containerID="f4d85100cd0f06816a98689538bc93ed981f60823f3ce37e7c844447bcdb96ee" exitCode=0 Feb 20 11:51:22.884841 master-0 kubenswrapper[7756]: I0220 11:51:22.884719 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:22.884841 master-0 kubenswrapper[7756]: I0220 11:51:22.884823 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:23.067936 master-0 kubenswrapper[7756]: I0220 11:51:23.067811 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe 
status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:23.067936 master-0 kubenswrapper[7756]: I0220 11:51:23.067920 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:25.884760 master-0 kubenswrapper[7756]: I0220 11:51:25.884691 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:25.886032 master-0 kubenswrapper[7756]: I0220 11:51:25.885905 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:26.067155 master-0 kubenswrapper[7756]: I0220 11:51:26.067047 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:26.067155 master-0 kubenswrapper[7756]: I0220 11:51:26.067125 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" 
podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:26.310069 master-0 kubenswrapper[7756]: E0220 11:51:26.309946 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:26.310069 master-0 kubenswrapper[7756]: I0220 11:51:26.310032 7756 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 20 11:51:26.666502 master-0 kubenswrapper[7756]: E0220 11:51:26.666288 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:28.884359 master-0 kubenswrapper[7756]: I0220 11:51:28.884268 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:28.885334 master-0 kubenswrapper[7756]: I0220 11:51:28.884375 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:29.067140 master-0 kubenswrapper[7756]: I0220 11:51:29.067084 7756 
patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:29.067597 master-0 kubenswrapper[7756]: I0220 11:51:29.067516 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:31.323985 master-0 kubenswrapper[7756]: E0220 11:51:31.323900 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 20 11:51:32.066834 master-0 kubenswrapper[7756]: I0220 11:51:32.066757 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:32.067100 master-0 kubenswrapper[7756]: I0220 11:51:32.066836 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:33.457628 master-0 kubenswrapper[7756]: I0220 11:51:33.457516 7756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-psm4s_836a6d7e-9b26-425f-ae21-00422515d7fe/approver/0.log" Feb 20 11:51:33.458615 master-0 kubenswrapper[7756]: I0220 11:51:33.458140 7756 generic.go:334] "Generic (PLEG): container finished" podID="836a6d7e-9b26-425f-ae21-00422515d7fe" containerID="ace904c5f4a3faa1035b1dcf89c693ce9b93dceae341e4edfb98ee1576eea9b6" exitCode=1 Feb 20 11:51:35.067312 master-0 kubenswrapper[7756]: I0220 11:51:35.067201 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:35.067312 master-0 kubenswrapper[7756]: I0220 11:51:35.067291 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:36.311046 master-0 kubenswrapper[7756]: E0220 11:51:36.310905 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 20 11:51:36.666672 master-0 kubenswrapper[7756]: E0220 11:51:36.666513 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:36.666672 master-0 kubenswrapper[7756]: E0220 
11:51:36.666572 7756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 20 11:51:36.852031 master-0 kubenswrapper[7756]: I0220 11:51:36.851929 7756 status_manager.go:851] "Failed to get status for pod" podUID="5710eb66-9717-4beb-a8b2-19f6886376b3" pod="openshift-etcd/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Feb 20 11:51:37.489798 master-0 kubenswrapper[7756]: I0220 11:51:37.489706 7756 generic.go:334] "Generic (PLEG): container finished" podID="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" containerID="8d90051cb425dcfb05eea700daacd614186eaabfc560fdf17a2b201fc46c56ad" exitCode=0 Feb 20 11:51:38.067344 master-0 kubenswrapper[7756]: I0220 11:51:38.067228 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:38.067745 master-0 kubenswrapper[7756]: I0220 11:51:38.067333 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:40.595823 master-0 kubenswrapper[7756]: E0220 11:51:40.595725 7756 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 20 11:51:40.596773 master-0 kubenswrapper[7756]: E0220 11:51:40.596016 7756 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too 
long" expected="1s" actual="34.018s" Feb 20 11:51:40.608718 master-0 kubenswrapper[7756]: I0220 11:51:40.608648 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 20 11:51:40.922123 master-0 kubenswrapper[7756]: E0220 11:51:40.922043 7756 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 20 11:51:40.922123 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-d6bb9bb76-k95mq_openshift-machine-api_bd609bd3-2525-4b88-8f07-94a0418fb582_0(9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740): error adding pod openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740" Netns:"/var/run/netns/fc150953-3303-4e0c-b7f7-754c4f67a1dd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-d6bb9bb76-k95mq;K8S_POD_INFRA_CONTAINER_ID=9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740;K8S_POD_UID=bd609bd3-2525-4b88-8f07-94a0418fb582" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq/bd609bd3-2525-4b88-8f07-94a0418fb582]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-d6bb9bb76-k95mq in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-d6bb9bb76-k95mq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-k95mq?timeout=1m0s": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:40.922123 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:40.922123 master-0 kubenswrapper[7756]: > Feb 20 11:51:40.922479 master-0 kubenswrapper[7756]: E0220 11:51:40.922162 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 20 11:51:40.922479 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-d6bb9bb76-k95mq_openshift-machine-api_bd609bd3-2525-4b88-8f07-94a0418fb582_0(9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740): error adding pod openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740" Netns:"/var/run/netns/fc150953-3303-4e0c-b7f7-754c4f67a1dd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-d6bb9bb76-k95mq;K8S_POD_INFRA_CONTAINER_ID=9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740;K8S_POD_UID=bd609bd3-2525-4b88-8f07-94a0418fb582" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq/bd609bd3-2525-4b88-8f07-94a0418fb582]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
cluster-baremetal-operator-d6bb9bb76-k95mq in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-d6bb9bb76-k95mq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-k95mq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:40.922479 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:40.922479 master-0 kubenswrapper[7756]: > pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 11:51:40.922479 master-0 kubenswrapper[7756]: E0220 11:51:40.922197 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 20 11:51:40.922479 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-d6bb9bb76-k95mq_openshift-machine-api_bd609bd3-2525-4b88-8f07-94a0418fb582_0(9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740): error adding pod openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740" Netns:"/var/run/netns/fc150953-3303-4e0c-b7f7-754c4f67a1dd" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-d6bb9bb76-k95mq;K8S_POD_INFRA_CONTAINER_ID=9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740;K8S_POD_UID=bd609bd3-2525-4b88-8f07-94a0418fb582" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq/bd609bd3-2525-4b88-8f07-94a0418fb582]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-d6bb9bb76-k95mq in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-d6bb9bb76-k95mq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-k95mq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:40.922479 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:40.922479 master-0 kubenswrapper[7756]: > pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 11:51:40.922479 master-0 kubenswrapper[7756]: E0220 11:51:40.922298 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-baremetal-operator-d6bb9bb76-k95mq_openshift-machine-api(bd609bd3-2525-4b88-8f07-94a0418fb582)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"cluster-baremetal-operator-d6bb9bb76-k95mq_openshift-machine-api(bd609bd3-2525-4b88-8f07-94a0418fb582)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-d6bb9bb76-k95mq_openshift-machine-api_bd609bd3-2525-4b88-8f07-94a0418fb582_0(9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740): error adding pod openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740\\\" Netns:\\\"/var/run/netns/fc150953-3303-4e0c-b7f7-754c4f67a1dd\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-d6bb9bb76-k95mq;K8S_POD_INFRA_CONTAINER_ID=9f72438c172fa46465e2e1b230b1d07f14af9e75cfa3af7efba57bf63c59e740;K8S_POD_UID=bd609bd3-2525-4b88-8f07-94a0418fb582\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq/bd609bd3-2525-4b88-8f07-94a0418fb582]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-d6bb9bb76-k95mq in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-d6bb9bb76-k95mq in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-d6bb9bb76-k95mq?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" podUID="bd609bd3-2525-4b88-8f07-94a0418fb582" Feb 20 11:51:41.066519 master-0 kubenswrapper[7756]: I0220 11:51:41.066414 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:41.066818 master-0 kubenswrapper[7756]: I0220 11:51:41.066516 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: E0220 11:51:41.103492 7756 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-86b8dc6d6-sksbt_openshift-machine-api_8ab951b1-6898-4357-b813-16365f3f89d5_0(9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2): error adding pod openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2" Netns:"/var/run/netns/ea41b5ec-3909-480c-8b63-c0b27997a31b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-86b8dc6d6-sksbt;K8S_POD_INFRA_CONTAINER_ID=9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2;K8S_POD_UID=8ab951b1-6898-4357-b813-16365f3f89d5" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt/8ab951b1-6898-4357-b813-16365f3f89d5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-86b8dc6d6-sksbt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-86b8dc6d6-sksbt in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-86b8dc6d6-sksbt?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: > Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: E0220 11:51:41.103592 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to 
create pod network sandbox k8s_cluster-autoscaler-operator-86b8dc6d6-sksbt_openshift-machine-api_8ab951b1-6898-4357-b813-16365f3f89d5_0(9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2): error adding pod openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2" Netns:"/var/run/netns/ea41b5ec-3909-480c-8b63-c0b27997a31b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-86b8dc6d6-sksbt;K8S_POD_INFRA_CONTAINER_ID=9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2;K8S_POD_UID=8ab951b1-6898-4357-b813-16365f3f89d5" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt/8ab951b1-6898-4357-b813-16365f3f89d5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-86b8dc6d6-sksbt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-86b8dc6d6-sksbt in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-86b8dc6d6-sksbt?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: > pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: E0220 11:51:41.103615 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-86b8dc6d6-sksbt_openshift-machine-api_8ab951b1-6898-4357-b813-16365f3f89d5_0(9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2): error adding pod openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2" Netns:"/var/run/netns/ea41b5ec-3909-480c-8b63-c0b27997a31b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-86b8dc6d6-sksbt;K8S_POD_INFRA_CONTAINER_ID=9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2;K8S_POD_UID=8ab951b1-6898-4357-b813-16365f3f89d5" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt/8ab951b1-6898-4357-b813-16365f3f89d5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-86b8dc6d6-sksbt in out of cluster comm: 
SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-86b8dc6d6-sksbt in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-86b8dc6d6-sksbt?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: > pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" Feb 20 11:51:41.103841 master-0 kubenswrapper[7756]: E0220 11:51:41.103684 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-autoscaler-operator-86b8dc6d6-sksbt_openshift-machine-api(8ab951b1-6898-4357-b813-16365f3f89d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-autoscaler-operator-86b8dc6d6-sksbt_openshift-machine-api(8ab951b1-6898-4357-b813-16365f3f89d5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-86b8dc6d6-sksbt_openshift-machine-api_8ab951b1-6898-4357-b813-16365f3f89d5_0(9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2): error adding pod openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2\\\" 
Netns:\\\"/var/run/netns/ea41b5ec-3909-480c-8b63-c0b27997a31b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-86b8dc6d6-sksbt;K8S_POD_INFRA_CONTAINER_ID=9db4c3eb9219c552906405f8879a4f88ac2a33eea5cd6c4ae32c066a597687c2;K8S_POD_UID=8ab951b1-6898-4357-b813-16365f3f89d5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt/8ab951b1-6898-4357-b813-16365f3f89d5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-86b8dc6d6-sksbt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-86b8dc6d6-sksbt in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-86b8dc6d6-sksbt?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" podUID="8ab951b1-6898-4357-b813-16365f3f89d5" Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: E0220 11:51:41.211263 7756 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_packageserver-795fd44d5c-t99pw_openshift-operator-lifecycle-manager_ae1fd116-6f63-4344-b7af-278665649e5a_0(22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422): error adding pod openshift-operator-lifecycle-manager_packageserver-795fd44d5c-t99pw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422" Netns:"/var/run/netns/57fad02c-31f7-42a3-a31f-dc201e882edb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-795fd44d5c-t99pw;K8S_POD_INFRA_CONTAINER_ID=22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422;K8S_POD_UID=ae1fd116-6f63-4344-b7af-278665649e5a" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw/ae1fd116-6f63-4344-b7af-278665649e5a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-795fd44d5c-t99pw in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-795fd44d5c-t99pw in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-795fd44d5c-t99pw?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: > Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: E0220 11:51:41.211360 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-795fd44d5c-t99pw_openshift-operator-lifecycle-manager_ae1fd116-6f63-4344-b7af-278665649e5a_0(22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422): error adding pod openshift-operator-lifecycle-manager_packageserver-795fd44d5c-t99pw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422" Netns:"/var/run/netns/57fad02c-31f7-42a3-a31f-dc201e882edb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-795fd44d5c-t99pw;K8S_POD_INFRA_CONTAINER_ID=22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422;K8S_POD_UID=ae1fd116-6f63-4344-b7af-278665649e5a" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw/ae1fd116-6f63-4344-b7af-278665649e5a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-795fd44d5c-t99pw in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-795fd44d5c-t99pw in out of 
cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-795fd44d5c-t99pw?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: > pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: E0220 11:51:41.211419 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-795fd44d5c-t99pw_openshift-operator-lifecycle-manager_ae1fd116-6f63-4344-b7af-278665649e5a_0(22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422): error adding pod openshift-operator-lifecycle-manager_packageserver-795fd44d5c-t99pw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422" Netns:"/var/run/netns/57fad02c-31f7-42a3-a31f-dc201e882edb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-795fd44d5c-t99pw;K8S_POD_INFRA_CONTAINER_ID=22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422;K8S_POD_UID=ae1fd116-6f63-4344-b7af-278665649e5a" Path:"" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw/ae1fd116-6f63-4344-b7af-278665649e5a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-795fd44d5c-t99pw in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-795fd44d5c-t99pw in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-795fd44d5c-t99pw?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: > pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 11:51:41.220861 master-0 kubenswrapper[7756]: E0220 11:51:41.211522 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"packageserver-795fd44d5c-t99pw_openshift-operator-lifecycle-manager(ae1fd116-6f63-4344-b7af-278665649e5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"packageserver-795fd44d5c-t99pw_openshift-operator-lifecycle-manager(ae1fd116-6f63-4344-b7af-278665649e5a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-795fd44d5c-t99pw_openshift-operator-lifecycle-manager_ae1fd116-6f63-4344-b7af-278665649e5a_0(22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422): error adding pod 
openshift-operator-lifecycle-manager_packageserver-795fd44d5c-t99pw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422\\\" Netns:\\\"/var/run/netns/57fad02c-31f7-42a3-a31f-dc201e882edb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-795fd44d5c-t99pw;K8S_POD_INFRA_CONTAINER_ID=22a40045dc0939ae71fbb3e09773d411e97e664018d6bd5ec5e07d55bea89422;K8S_POD_UID=ae1fd116-6f63-4344-b7af-278665649e5a\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw/ae1fd116-6f63-4344-b7af-278665649e5a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-795fd44d5c-t99pw in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-795fd44d5c-t99pw in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-795fd44d5c-t99pw?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" 
podUID="ae1fd116-6f63-4344-b7af-278665649e5a" Feb 20 11:51:41.332417 master-0 kubenswrapper[7756]: E0220 11:51:41.332342 7756 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 20 11:51:41.332417 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-5c7cf458b4-dmvlr_openshift-machine-api_62fc400b-b3dd-4134-bd27-69dd8369153a_0(e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d): error adding pod openshift-machine-api_machine-api-operator-5c7cf458b4-dmvlr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d" Netns:"/var/run/netns/75d66e66-23b2-4694-9f3a-b3e53b1cc0b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-5c7cf458b4-dmvlr;K8S_POD_INFRA_CONTAINER_ID=e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d;K8S_POD_UID=62fc400b-b3dd-4134-bd27-69dd8369153a" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr] networking: Multus: [openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr/62fc400b-b3dd-4134-bd27-69dd8369153a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-5c7cf458b4-dmvlr in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-5c7cf458b4-dmvlr in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-dmvlr?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.332417 master-0 kubenswrapper[7756]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.332417 master-0 kubenswrapper[7756]: > Feb 20 11:51:41.332769 master-0 kubenswrapper[7756]: E0220 11:51:41.332427 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 20 11:51:41.332769 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-5c7cf458b4-dmvlr_openshift-machine-api_62fc400b-b3dd-4134-bd27-69dd8369153a_0(e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d): error adding pod openshift-machine-api_machine-api-operator-5c7cf458b4-dmvlr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d" Netns:"/var/run/netns/75d66e66-23b2-4694-9f3a-b3e53b1cc0b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-5c7cf458b4-dmvlr;K8S_POD_INFRA_CONTAINER_ID=e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d;K8S_POD_UID=62fc400b-b3dd-4134-bd27-69dd8369153a" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr] networking: Multus: [openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr/62fc400b-b3dd-4134-bd27-69dd8369153a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-5c7cf458b4-dmvlr in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-5c7cf458b4-dmvlr in out of cluster comm: status update 
failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-dmvlr?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.332769 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.332769 master-0 kubenswrapper[7756]: > pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:51:41.332769 master-0 kubenswrapper[7756]: E0220 11:51:41.332452 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 20 11:51:41.332769 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-5c7cf458b4-dmvlr_openshift-machine-api_62fc400b-b3dd-4134-bd27-69dd8369153a_0(e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d): error adding pod openshift-machine-api_machine-api-operator-5c7cf458b4-dmvlr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d" Netns:"/var/run/netns/75d66e66-23b2-4694-9f3a-b3e53b1cc0b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-5c7cf458b4-dmvlr;K8S_POD_INFRA_CONTAINER_ID=e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d;K8S_POD_UID=62fc400b-b3dd-4134-bd27-69dd8369153a" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr] networking: Multus: 
[openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr/62fc400b-b3dd-4134-bd27-69dd8369153a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-5c7cf458b4-dmvlr in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-5c7cf458b4-dmvlr in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-dmvlr?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.332769 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.332769 master-0 kubenswrapper[7756]: > pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 11:51:41.332769 master-0 kubenswrapper[7756]: E0220 11:51:41.332583 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"machine-api-operator-5c7cf458b4-dmvlr_openshift-machine-api(62fc400b-b3dd-4134-bd27-69dd8369153a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"machine-api-operator-5c7cf458b4-dmvlr_openshift-machine-api(62fc400b-b3dd-4134-bd27-69dd8369153a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-5c7cf458b4-dmvlr_openshift-machine-api_62fc400b-b3dd-4134-bd27-69dd8369153a_0(e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d): error adding pod openshift-machine-api_machine-api-operator-5c7cf458b4-dmvlr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" 
name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d\\\" Netns:\\\"/var/run/netns/75d66e66-23b2-4694-9f3a-b3e53b1cc0b3\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-5c7cf458b4-dmvlr;K8S_POD_INFRA_CONTAINER_ID=e4eb3fad9bf5cf4838e704887511f2459fbe38f56ae0faa28d3048f7ab72b38d;K8S_POD_UID=62fc400b-b3dd-4134-bd27-69dd8369153a\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr] networking: Multus: [openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr/62fc400b-b3dd-4134-bd27-69dd8369153a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-5c7cf458b4-dmvlr in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-5c7cf458b4-dmvlr in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-5c7cf458b4-dmvlr?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" podUID="62fc400b-b3dd-4134-bd27-69dd8369153a" Feb 20 11:51:41.374349 master-0 kubenswrapper[7756]: E0220 11:51:41.374281 7756 log.go:32] "RunPodSandbox from runtime 
service failed" err=< Feb 20 11:51:41.374349 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-hsjr7_openshift-insights_daf25ef5-8247-4dbb-bdc1-55104b1015b7_0(1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e): error adding pod openshift-insights_insights-operator-59b498fcfb-hsjr7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e" Netns:"/var/run/netns/27c0c6a2-560c-40a6-b0c0-e8e1ce19664e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-hsjr7;K8S_POD_INFRA_CONTAINER_ID=1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e;K8S_POD_UID=daf25ef5-8247-4dbb-bdc1-55104b1015b7" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-hsjr7] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-hsjr7/daf25ef5-8247-4dbb-bdc1-55104b1015b7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-hsjr7 in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-hsjr7 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-hsjr7?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.374349 master-0 kubenswrapper[7756]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.374349 master-0 kubenswrapper[7756]: > Feb 20 11:51:41.374892 master-0 kubenswrapper[7756]: E0220 11:51:41.374353 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 20 11:51:41.374892 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-hsjr7_openshift-insights_daf25ef5-8247-4dbb-bdc1-55104b1015b7_0(1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e): error adding pod openshift-insights_insights-operator-59b498fcfb-hsjr7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e" Netns:"/var/run/netns/27c0c6a2-560c-40a6-b0c0-e8e1ce19664e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-hsjr7;K8S_POD_INFRA_CONTAINER_ID=1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e;K8S_POD_UID=daf25ef5-8247-4dbb-bdc1-55104b1015b7" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-hsjr7] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-hsjr7/daf25ef5-8247-4dbb-bdc1-55104b1015b7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-hsjr7 in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-hsjr7 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-hsjr7?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.374892 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.374892 master-0 kubenswrapper[7756]: > pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" Feb 20 11:51:41.374892 master-0 kubenswrapper[7756]: E0220 11:51:41.374396 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 20 11:51:41.374892 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-hsjr7_openshift-insights_daf25ef5-8247-4dbb-bdc1-55104b1015b7_0(1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e): error adding pod openshift-insights_insights-operator-59b498fcfb-hsjr7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e" Netns:"/var/run/netns/27c0c6a2-560c-40a6-b0c0-e8e1ce19664e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-hsjr7;K8S_POD_INFRA_CONTAINER_ID=1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e;K8S_POD_UID=daf25ef5-8247-4dbb-bdc1-55104b1015b7" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-hsjr7] networking: Multus: 
[openshift-insights/insights-operator-59b498fcfb-hsjr7/daf25ef5-8247-4dbb-bdc1-55104b1015b7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-hsjr7 in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-hsjr7 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-hsjr7?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.374892 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.374892 master-0 kubenswrapper[7756]: > pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" Feb 20 11:51:41.374892 master-0 kubenswrapper[7756]: E0220 11:51:41.374460 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"insights-operator-59b498fcfb-hsjr7_openshift-insights(daf25ef5-8247-4dbb-bdc1-55104b1015b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"insights-operator-59b498fcfb-hsjr7_openshift-insights(daf25ef5-8247-4dbb-bdc1-55104b1015b7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-59b498fcfb-hsjr7_openshift-insights_daf25ef5-8247-4dbb-bdc1-55104b1015b7_0(1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e): error adding pod openshift-insights_insights-operator-59b498fcfb-hsjr7 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): 
CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e\\\" Netns:\\\"/var/run/netns/27c0c6a2-560c-40a6-b0c0-e8e1ce19664e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-59b498fcfb-hsjr7;K8S_POD_INFRA_CONTAINER_ID=1e00fca8587c60bd036a468150614d26614c2ae73089a64d2c2925c8866cfe3e;K8S_POD_UID=daf25ef5-8247-4dbb-bdc1-55104b1015b7\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-insights/insights-operator-59b498fcfb-hsjr7] networking: Multus: [openshift-insights/insights-operator-59b498fcfb-hsjr7/daf25ef5-8247-4dbb-bdc1-55104b1015b7]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-59b498fcfb-hsjr7 in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-59b498fcfb-hsjr7 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59b498fcfb-hsjr7?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" podUID="daf25ef5-8247-4dbb-bdc1-55104b1015b7" Feb 20 11:51:41.447940 master-0 kubenswrapper[7756]: E0220 11:51:41.447854 7756 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 20 11:51:41.447940 master-0 kubenswrapper[7756]: rpc error: 
code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-689d967cd5-ptpq6_openshift-route-controller-manager_c29fd426-7c89-434e-8332-1ca31075d4bf_0(568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7): error adding pod openshift-route-controller-manager_route-controller-manager-689d967cd5-ptpq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7" Netns:"/var/run/netns/b94db180-e2be-43c8-94bd-e1f7626a1d05" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-689d967cd5-ptpq6;K8S_POD_INFRA_CONTAINER_ID=568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7;K8S_POD_UID=c29fd426-7c89-434e-8332-1ca31075d4bf" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6] networking: Multus: [openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6/c29fd426-7c89-434e-8332-1ca31075d4bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-689d967cd5-ptpq6 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-689d967cd5-ptpq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-689d967cd5-ptpq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.447940 master-0 kubenswrapper[7756]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 11:51:41.447940 master-0 kubenswrapper[7756]: >
Feb 20 11:51:41.448465 master-0 kubenswrapper[7756]: E0220 11:51:41.447941 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 20 11:51:41.448465 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-689d967cd5-ptpq6_openshift-route-controller-manager_c29fd426-7c89-434e-8332-1ca31075d4bf_0(568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7): error adding pod openshift-route-controller-manager_route-controller-manager-689d967cd5-ptpq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7" Netns:"/var/run/netns/b94db180-e2be-43c8-94bd-e1f7626a1d05" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-689d967cd5-ptpq6;K8S_POD_INFRA_CONTAINER_ID=568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7;K8S_POD_UID=c29fd426-7c89-434e-8332-1ca31075d4bf" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6] networking: Multus: [openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6/c29fd426-7c89-434e-8332-1ca31075d4bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-689d967cd5-ptpq6 in out of cluster comm: SetNetworkStatus: failed to
update the pod route-controller-manager-689d967cd5-ptpq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-689d967cd5-ptpq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 20 11:51:41.448465 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 11:51:41.448465 master-0 kubenswrapper[7756]: > pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 11:51:41.448465 master-0 kubenswrapper[7756]: E0220 11:51:41.447967 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 20 11:51:41.448465 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-689d967cd5-ptpq6_openshift-route-controller-manager_c29fd426-7c89-434e-8332-1ca31075d4bf_0(568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7): error adding pod openshift-route-controller-manager_route-controller-manager-689d967cd5-ptpq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7" Netns:"/var/run/netns/b94db180-e2be-43c8-94bd-e1f7626a1d05" IfName:"eth0"
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-689d967cd5-ptpq6;K8S_POD_INFRA_CONTAINER_ID=568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7;K8S_POD_UID=c29fd426-7c89-434e-8332-1ca31075d4bf" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6] networking: Multus: [openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6/c29fd426-7c89-434e-8332-1ca31075d4bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-689d967cd5-ptpq6 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-689d967cd5-ptpq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-689d967cd5-ptpq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 20 11:51:41.448465 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 11:51:41.448465 master-0 kubenswrapper[7756]: > pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 11:51:41.448465 master-0 kubenswrapper[7756]: E0220 11:51:41.448037 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-689d967cd5-ptpq6_openshift-route-controller-manager(c29fd426-7c89-434e-8332-1ca31075d4bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod
\\\"route-controller-manager-689d967cd5-ptpq6_openshift-route-controller-manager(c29fd426-7c89-434e-8332-1ca31075d4bf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-689d967cd5-ptpq6_openshift-route-controller-manager_c29fd426-7c89-434e-8332-1ca31075d4bf_0(568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7): error adding pod openshift-route-controller-manager_route-controller-manager-689d967cd5-ptpq6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7\\\" Netns:\\\"/var/run/netns/b94db180-e2be-43c8-94bd-e1f7626a1d05\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-689d967cd5-ptpq6;K8S_POD_INFRA_CONTAINER_ID=568ca9797d27b4069f25d13a7967c8b05594ed2a45624c9b0e97ab8e7526d5a7;K8S_POD_UID=c29fd426-7c89-434e-8332-1ca31075d4bf\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6] networking: Multus: [openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6/c29fd426-7c89-434e-8332-1ca31075d4bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-689d967cd5-ptpq6 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-689d967cd5-ptpq6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-689d967cd5-ptpq6?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf"
Feb 20 11:51:41.455842 master-0 kubenswrapper[7756]: E0220 11:51:41.455767 7756 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 20 11:51:41.455842 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_eb93420d-7c5a-4492-bd16-0104104406b4_0(427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab" Netns:"/var/run/netns/ea0795e6-6d25-460c-8085-e93776638520" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab;K8S_POD_UID=eb93420d-7c5a-4492-bd16-0104104406b4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/eb93420d-7c5a-4492-bd16-0104104406b4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of
cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 20 11:51:41.455842 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 11:51:41.455842 master-0 kubenswrapper[7756]: >
Feb 20 11:51:41.456092 master-0 kubenswrapper[7756]: E0220 11:51:41.455871 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 20 11:51:41.456092 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_eb93420d-7c5a-4492-bd16-0104104406b4_0(427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab" Netns:"/var/run/netns/ea0795e6-6d25-460c-8085-e93776638520" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab;K8S_POD_UID=eb93420d-7c5a-4492-bd16-0104104406b4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0]
networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/eb93420d-7c5a-4492-bd16-0104104406b4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 20 11:51:41.456092 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 11:51:41.456092 master-0 kubenswrapper[7756]: > pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 11:51:41.456092 master-0 kubenswrapper[7756]: E0220 11:51:41.455916 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 20 11:51:41.456092 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_eb93420d-7c5a-4492-bd16-0104104406b4_0(427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab" Netns:"/var/run/netns/ea0795e6-6d25-460c-8085-e93776638520" IfName:"eth0"
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab;K8S_POD_UID=eb93420d-7c5a-4492-bd16-0104104406b4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/eb93420d-7c5a-4492-bd16-0104104406b4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 20 11:51:41.456092 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 11:51:41.456092 master-0 kubenswrapper[7756]: > pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 11:51:41.456092 master-0 kubenswrapper[7756]: E0220 11:51:41.456030 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(eb93420d-7c5a-4492-bd16-0104104406b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(eb93420d-7c5a-4492-bd16-0104104406b4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox
k8s_installer-2-master-0_openshift-kube-controller-manager_eb93420d-7c5a-4492-bd16-0104104406b4_0(427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab\\\" Netns:\\\"/var/run/netns/ea0795e6-6d25-460c-8085-e93776638520\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=427cd48f1b7ae9afa5f7c2650f4e9538f911e130a2d631a1fd6376edc7e4cbab;K8S_POD_UID=eb93420d-7c5a-4492-bd16-0104104406b4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/eb93420d-7c5a-4492-bd16-0104104406b4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="eb93420d-7c5a-4492-bd16-0104104406b4"
Feb 20 11:51:41.470992 master-0 kubenswrapper[7756]: E0220 11:51:41.470929 7756 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 20 11:51:41.470992 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-6968c58f46-fq68q_openshift-cloud-credential-operator_ef18ace4-7316-4600-9be9-2adc792705e9_0(eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-fq68q to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4" Netns:"/var/run/netns/07241f2a-c610-455e-bfad-7f992038da57" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-6968c58f46-fq68q;K8S_POD_INFRA_CONTAINER_ID=eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4;K8S_POD_UID=ef18ace4-7316-4600-9be9-2adc792705e9" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q/ef18ace4-7316-4600-9be9-2adc792705e9]: error setting the
networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-6968c58f46-fq68q in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-6968c58f46-fq68q in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-6968c58f46-fq68q?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 20 11:51:41.470992 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 11:51:41.470992 master-0 kubenswrapper[7756]: >
Feb 20 11:51:41.471255 master-0 kubenswrapper[7756]: E0220 11:51:41.471010 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 20 11:51:41.471255 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-6968c58f46-fq68q_openshift-cloud-credential-operator_ef18ace4-7316-4600-9be9-2adc792705e9_0(eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-fq68q to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4" Netns:"/var/run/netns/07241f2a-c610-455e-bfad-7f992038da57" IfName:"eth0"
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-6968c58f46-fq68q;K8S_POD_INFRA_CONTAINER_ID=eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4;K8S_POD_UID=ef18ace4-7316-4600-9be9-2adc792705e9" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q/ef18ace4-7316-4600-9be9-2adc792705e9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-6968c58f46-fq68q in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-6968c58f46-fq68q in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-6968c58f46-fq68q?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 20 11:51:41.471255 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 11:51:41.471255 master-0 kubenswrapper[7756]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q"
Feb 20 11:51:41.471255 master-0 kubenswrapper[7756]: E0220 11:51:41.471040 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 20 11:51:41.471255 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox
k8s_cloud-credential-operator-6968c58f46-fq68q_openshift-cloud-credential-operator_ef18ace4-7316-4600-9be9-2adc792705e9_0(eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-fq68q to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4" Netns:"/var/run/netns/07241f2a-c610-455e-bfad-7f992038da57" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-6968c58f46-fq68q;K8S_POD_INFRA_CONTAINER_ID=eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4;K8S_POD_UID=ef18ace4-7316-4600-9be9-2adc792705e9" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q/ef18ace4-7316-4600-9be9-2adc792705e9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-6968c58f46-fq68q in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-6968c58f46-fq68q in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-6968c58f46-fq68q?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 20 11:51:41.471255 master-0 kubenswrapper[7756]: ': StdinData:
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 11:51:41.471255 master-0 kubenswrapper[7756]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q"
Feb 20 11:51:41.471255 master-0 kubenswrapper[7756]: E0220 11:51:41.471127 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cloud-credential-operator-6968c58f46-fq68q_openshift-cloud-credential-operator(ef18ace4-7316-4600-9be9-2adc792705e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cloud-credential-operator-6968c58f46-fq68q_openshift-cloud-credential-operator(ef18ace4-7316-4600-9be9-2adc792705e9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-6968c58f46-fq68q_openshift-cloud-credential-operator_ef18ace4-7316-4600-9be9-2adc792705e9_0(eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-fq68q to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4\\\" Netns:\\\"/var/run/netns/07241f2a-c610-455e-bfad-7f992038da57\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-6968c58f46-fq68q;K8S_POD_INFRA_CONTAINER_ID=eef9352399e4672e1afb3bada9fb396cbdd56916a3e844f1937e8232eb2e03f4;K8S_POD_UID=ef18ace4-7316-4600-9be9-2adc792705e9\\\" Path:\\\"\\\" ERRORED: error
configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q/ef18ace4-7316-4600-9be9-2adc792705e9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-6968c58f46-fq68q in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-6968c58f46-fq68q in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-6968c58f46-fq68q?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" podUID="ef18ace4-7316-4600-9be9-2adc792705e9"
Feb 20 11:51:41.515877 master-0 kubenswrapper[7756]: I0220 11:51:41.515825 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 11:51:41.516009 master-0 kubenswrapper[7756]: I0220 11:51:41.515884 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:51:41.516009 master-0 kubenswrapper[7756]: I0220 11:51:41.515914 7756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"
Feb 20 11:51:41.516009 master-0 kubenswrapper[7756]: I0220 11:51:41.515965 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:51:41.516189 master-0 kubenswrapper[7756]: I0220 11:51:41.516040 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 11:51:41.516189 master-0 kubenswrapper[7756]: I0220 11:51:41.516032 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 11:51:41.516303 master-0 kubenswrapper[7756]: I0220 11:51:41.516287 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 11:51:41.516683 master-0 kubenswrapper[7756]: I0220 11:51:41.515814 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q"
Feb 20 11:51:41.516851 master-0 kubenswrapper[7756]: I0220 11:51:41.516700 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 11:51:41.516851 master-0 kubenswrapper[7756]: I0220 11:51:41.516814 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"
Feb 20 11:51:41.517156 master-0 kubenswrapper[7756]: I0220 11:51:41.517077 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 11:51:41.517637 master-0 kubenswrapper[7756]: I0220 11:51:41.517337 7756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q"
Feb 20 11:51:41.517637 master-0 kubenswrapper[7756]: I0220 11:51:41.517432 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 11:51:41.517637 master-0 kubenswrapper[7756]: I0220 11:51:41.517594 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 11:51:41.599429 master-0 kubenswrapper[7756]: E0220 11:51:41.599349 7756 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 20 11:51:41.599429 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-d9vsg_openshift-cluster-storage-operator_bbdbadd9-eeaa-46ef-936e-5db8d395c118_0(b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-d9vsg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2" Netns:"/var/run/netns/8235c708-9f39-4fd2-b947-82c881360067" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-d9vsg;K8S_POD_INFRA_CONTAINER_ID=b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2;K8S_POD_UID=bbdbadd9-eeaa-46ef-936e-5db8d395c118" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg/bbdbadd9-eeaa-46ef-936e-5db8d395c118]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to
update the pod cluster-storage-operator-f94476f49-d9vsg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-d9vsg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-d9vsg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.599429 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.599429 master-0 kubenswrapper[7756]: > Feb 20 11:51:41.600297 master-0 kubenswrapper[7756]: E0220 11:51:41.599460 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 20 11:51:41.600297 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-d9vsg_openshift-cluster-storage-operator_bbdbadd9-eeaa-46ef-936e-5db8d395c118_0(b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-d9vsg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2" Netns:"/var/run/netns/8235c708-9f39-4fd2-b947-82c881360067" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-d9vsg;K8S_POD_INFRA_CONTAINER_ID=b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2;K8S_POD_UID=bbdbadd9-eeaa-46ef-936e-5db8d395c118" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg/bbdbadd9-eeaa-46ef-936e-5db8d395c118]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-d9vsg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-d9vsg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-d9vsg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.600297 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.600297 master-0 kubenswrapper[7756]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" Feb 20 11:51:41.600297 master-0 kubenswrapper[7756]: E0220 11:51:41.599497 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 20 11:51:41.600297 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-storage-operator-f94476f49-d9vsg_openshift-cluster-storage-operator_bbdbadd9-eeaa-46ef-936e-5db8d395c118_0(b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-d9vsg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2" Netns:"/var/run/netns/8235c708-9f39-4fd2-b947-82c881360067" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-d9vsg;K8S_POD_INFRA_CONTAINER_ID=b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2;K8S_POD_UID=bbdbadd9-eeaa-46ef-936e-5db8d395c118" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg/bbdbadd9-eeaa-46ef-936e-5db8d395c118]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-d9vsg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-d9vsg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-d9vsg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.600297 master-0 kubenswrapper[7756]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.600297 master-0 kubenswrapper[7756]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" Feb 20 11:51:41.600297 master-0 kubenswrapper[7756]: E0220 11:51:41.599626 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-storage-operator-f94476f49-d9vsg_openshift-cluster-storage-operator(bbdbadd9-eeaa-46ef-936e-5db8d395c118)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-storage-operator-f94476f49-d9vsg_openshift-cluster-storage-operator(bbdbadd9-eeaa-46ef-936e-5db8d395c118)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-f94476f49-d9vsg_openshift-cluster-storage-operator_bbdbadd9-eeaa-46ef-936e-5db8d395c118_0(b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-d9vsg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2\\\" Netns:\\\"/var/run/netns/8235c708-9f39-4fd2-b947-82c881360067\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-f94476f49-d9vsg;K8S_POD_INFRA_CONTAINER_ID=b734d05569bfdb6dc6d8a9b39a087412fdcd09e0cd4b6ebc5dae7cd326c8ecd2;K8S_POD_UID=bbdbadd9-eeaa-46ef-936e-5db8d395c118\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg/bbdbadd9-eeaa-46ef-936e-5db8d395c118]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-f94476f49-d9vsg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-f94476f49-d9vsg in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f94476f49-d9vsg?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" podUID="bbdbadd9-eeaa-46ef-936e-5db8d395c118" Feb 20 11:51:41.614308 master-0 kubenswrapper[7756]: E0220 11:51:41.614252 7756 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 20 11:51:41.614308 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-65c5c48b9b-2k7xj_openshift-cluster-samples-operator_5c104245-d078-4856-9a60-207bb6efcfe8_0(c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-65c5c48b9b-2k7xj to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd" Netns:"/var/run/netns/06757537-efc9-467e-91d6-5a6b33503e2d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-65c5c48b9b-2k7xj;K8S_POD_INFRA_CONTAINER_ID=c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd;K8S_POD_UID=5c104245-d078-4856-9a60-207bb6efcfe8" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj/5c104245-d078-4856-9a60-207bb6efcfe8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-65c5c48b9b-2k7xj in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-65c5c48b9b-2k7xj in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-65c5c48b9b-2k7xj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.614308 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.614308 master-0 kubenswrapper[7756]: > Feb 20 11:51:41.614638 master-0 kubenswrapper[7756]: E0220 11:51:41.614335 7756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 20 11:51:41.614638 master-0 kubenswrapper[7756]: rpc 
error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-65c5c48b9b-2k7xj_openshift-cluster-samples-operator_5c104245-d078-4856-9a60-207bb6efcfe8_0(c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-65c5c48b9b-2k7xj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd" Netns:"/var/run/netns/06757537-efc9-467e-91d6-5a6b33503e2d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-65c5c48b9b-2k7xj;K8S_POD_INFRA_CONTAINER_ID=c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd;K8S_POD_UID=5c104245-d078-4856-9a60-207bb6efcfe8" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj/5c104245-d078-4856-9a60-207bb6efcfe8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-65c5c48b9b-2k7xj in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-65c5c48b9b-2k7xj in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-65c5c48b9b-2k7xj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.614638 master-0 kubenswrapper[7756]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.614638 master-0 kubenswrapper[7756]: > pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 11:51:41.614638 master-0 kubenswrapper[7756]: E0220 11:51:41.614390 7756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 20 11:51:41.614638 master-0 kubenswrapper[7756]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-65c5c48b9b-2k7xj_openshift-cluster-samples-operator_5c104245-d078-4856-9a60-207bb6efcfe8_0(c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-65c5c48b9b-2k7xj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd" Netns:"/var/run/netns/06757537-efc9-467e-91d6-5a6b33503e2d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-65c5c48b9b-2k7xj;K8S_POD_INFRA_CONTAINER_ID=c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd;K8S_POD_UID=5c104245-d078-4856-9a60-207bb6efcfe8" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj/5c104245-d078-4856-9a60-207bb6efcfe8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
cluster-samples-operator-65c5c48b9b-2k7xj in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-65c5c48b9b-2k7xj in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-65c5c48b9b-2k7xj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 20 11:51:41.614638 master-0 kubenswrapper[7756]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 20 11:51:41.614638 master-0 kubenswrapper[7756]: > pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 11:51:41.614638 master-0 kubenswrapper[7756]: E0220 11:51:41.614492 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-samples-operator-65c5c48b9b-2k7xj_openshift-cluster-samples-operator(5c104245-d078-4856-9a60-207bb6efcfe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-samples-operator-65c5c48b9b-2k7xj_openshift-cluster-samples-operator(5c104245-d078-4856-9a60-207bb6efcfe8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-65c5c48b9b-2k7xj_openshift-cluster-samples-operator_5c104245-d078-4856-9a60-207bb6efcfe8_0(c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-65c5c48b9b-2k7xj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed 
with status 400: 'ContainerID:\\\"c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd\\\" Netns:\\\"/var/run/netns/06757537-efc9-467e-91d6-5a6b33503e2d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-65c5c48b9b-2k7xj;K8S_POD_INFRA_CONTAINER_ID=c7031929da002fb854062b56cef8e41d3fd790334e54b19a2b39361227ed42cd;K8S_POD_UID=5c104245-d078-4856-9a60-207bb6efcfe8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj/5c104245-d078-4856-9a60-207bb6efcfe8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-65c5c48b9b-2k7xj in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-65c5c48b9b-2k7xj in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-65c5c48b9b-2k7xj?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" podUID="5c104245-d078-4856-9a60-207bb6efcfe8" Feb 20 11:51:42.521372 master-0 kubenswrapper[7756]: I0220 11:51:42.521295 7756 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" Feb 20 11:51:42.521372 master-0 kubenswrapper[7756]: I0220 11:51:42.521347 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 11:51:42.522060 master-0 kubenswrapper[7756]: I0220 11:51:42.522017 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" Feb 20 11:51:42.522302 master-0 kubenswrapper[7756]: I0220 11:51:42.522260 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 11:51:44.066733 master-0 kubenswrapper[7756]: I0220 11:51:44.066672 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:44.067344 master-0 kubenswrapper[7756]: I0220 11:51:44.066754 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:44.265111 master-0 kubenswrapper[7756]: E0220 11:51:44.264912 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-tkdbv.1895f21dbcf236b0 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-tkdbv,UID:3733ccb5-2cea-4151-a2a7-d9c089a34cbc,APIVersion:v1,ResourceVersion:7204,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 32.577s (32.577s including waiting). Image size: 1210258627 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:50:37.978121904 +0000 UTC m=+83.720369912,LastTimestamp:2026-02-20 11:50:37.978121904 +0000 UTC m=+83.720369912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:51:44.541786 master-0 kubenswrapper[7756]: I0220 11:51:44.541716 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7bd4430b-8dbc-46df-9efe-49d520a7c75a/installer/0.log" Feb 20 11:51:44.541786 master-0 kubenswrapper[7756]: I0220 11:51:44.541779 7756 generic.go:334] "Generic (PLEG): container finished" podID="7bd4430b-8dbc-46df-9efe-49d520a7c75a" containerID="e80cac2721cbb0873c9a56ecbcc2ab13f0cf0ddd137a7458a4798813bbf93c32" exitCode=1 Feb 20 11:51:46.512796 master-0 kubenswrapper[7756]: E0220 11:51:46.512656 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 20 11:51:47.067682 master-0 kubenswrapper[7756]: I0220 11:51:47.067610 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection 
refused" start-of-body= Feb 20 11:51:47.067895 master-0 kubenswrapper[7756]: I0220 11:51:47.067695 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:47.564263 master-0 kubenswrapper[7756]: I0220 11:51:47.564167 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_74e9ba02-39d0-41fb-aed1-39923698bc0b/installer/0.log" Feb 20 11:51:47.564263 master-0 kubenswrapper[7756]: I0220 11:51:47.564252 7756 generic.go:334] "Generic (PLEG): container finished" podID="74e9ba02-39d0-41fb-aed1-39923698bc0b" containerID="ca54dfc79fe363224f0633dc3e9a5365e79752aa92793a430f4511b5aeb939dc" exitCode=1 Feb 20 11:51:50.066890 master-0 kubenswrapper[7756]: I0220 11:51:50.066765 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:50.066890 master-0 kubenswrapper[7756]: I0220 11:51:50.066867 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:50.586820 master-0 kubenswrapper[7756]: I0220 11:51:50.586707 7756 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" 
containerID="407a4490b53b516c4eaa24c4972588c07da9b5f9574f9b35da5b44a438b78bcc" exitCode=1 Feb 20 11:51:53.067027 master-0 kubenswrapper[7756]: I0220 11:51:53.066895 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:53.067027 master-0 kubenswrapper[7756]: I0220 11:51:53.066996 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:56.066978 master-0 kubenswrapper[7756]: I0220 11:51:56.066887 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:56.067772 master-0 kubenswrapper[7756]: I0220 11:51:56.066975 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:51:56.913963 master-0 kubenswrapper[7756]: E0220 11:51:56.913834 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 20 11:51:56.938702 master-0 kubenswrapper[7756]: E0220 11:51:56.938463 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:51:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:51:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:51:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:51:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:18622d3875e4a2dd9fde1633a737ae82af1df960d3bbcbda22c44df6cea6aa74\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2d4cac2da3d445443ee7ac3918878091ebecdaadbd2742424bb1a02391a1c5b3\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1235965143},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operat
or-index@sha256:2458acf77e6551a99656a2a1643e7ef4bf008f6bf792157614710eb9b28e0e64\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3c45f047394ebd29a640afe4c1e96739e5155ec608b61170a2274911bdf56a3d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210258627},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8
afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579e
e8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\\\"],\\\"sizeBytes\\\":467133839},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:51:59.067554 master-0 kubenswrapper[7756]: I0220 11:51:59.067453 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:51:59.068224 master-0 kubenswrapper[7756]: I0220 11:51:59.067622 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:02.066718 master-0 kubenswrapper[7756]: I0220 11:52:02.066599 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:02.067491 master-0 kubenswrapper[7756]: I0220 11:52:02.066717 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" 
output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:04.570051 master-0 kubenswrapper[7756]: I0220 11:52:04.569985 7756 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-d69w2 container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Feb 20 11:52:04.571156 master-0 kubenswrapper[7756]: I0220 11:52:04.571105 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" podUID="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Feb 20 11:52:05.066865 master-0 kubenswrapper[7756]: I0220 11:52:05.066779 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:05.067143 master-0 kubenswrapper[7756]: I0220 11:52:05.066866 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:06.939458 master-0 kubenswrapper[7756]: E0220 11:52:06.939385 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 20 11:52:07.715312 master-0 kubenswrapper[7756]: E0220 11:52:07.715185 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 20 11:52:08.067476 master-0 kubenswrapper[7756]: I0220 11:52:08.067377 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:08.067476 master-0 kubenswrapper[7756]: I0220 11:52:08.067469 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:10.713770 master-0 kubenswrapper[7756]: I0220 11:52:10.713661 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-qwwbk_01e90033-9ddf-41b4-ab61-e89add6c2fde/service-ca-operator/1.log" Feb 20 11:52:10.714723 master-0 kubenswrapper[7756]: I0220 11:52:10.714300 7756 generic.go:334] "Generic (PLEG): container finished" podID="01e90033-9ddf-41b4-ab61-e89add6c2fde" containerID="5602fcf86766ef7d0d60953da5d2c52d3e2681c284b76402a701dd6648958446" exitCode=255 Feb 20 11:52:11.067491 master-0 kubenswrapper[7756]: I0220 11:52:11.067407 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure 
output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:11.067862 master-0 kubenswrapper[7756]: I0220 11:52:11.067524 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:12.731660 master-0 kubenswrapper[7756]: I0220 11:52:12.731449 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/0.log" Feb 20 11:52:12.731660 master-0 kubenswrapper[7756]: I0220 11:52:12.731599 7756 generic.go:334] "Generic (PLEG): container finished" podID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" containerID="3d07e9c592eed7a379f55e981ead57df10fdecdbcdadc7facb3720be20c537af" exitCode=1 Feb 20 11:52:14.067572 master-0 kubenswrapper[7756]: I0220 11:52:14.067406 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:14.067572 master-0 kubenswrapper[7756]: I0220 11:52:14.067493 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:14.611490 master-0 kubenswrapper[7756]: E0220 11:52:14.611384 7756 mirror_client.go:138] "Failed deleting 
a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 20 11:52:14.611813 master-0 kubenswrapper[7756]: E0220 11:52:14.611724 7756 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Feb 20 11:52:14.611813 master-0 kubenswrapper[7756]: I0220 11:52:14.611778 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:52:14.612012 master-0 kubenswrapper[7756]: I0220 11:52:14.611829 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:52:14.612798 master-0 kubenswrapper[7756]: I0220 11:52:14.612735 7756 scope.go:117] "RemoveContainer" containerID="407a4490b53b516c4eaa24c4972588c07da9b5f9574f9b35da5b44a438b78bcc" Feb 20 11:52:14.613090 master-0 kubenswrapper[7756]: I0220 11:52:14.613018 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"435d301867d3455ab1264a913824bd1f0d3bfd3f13750e2fec9b286b064d3e49"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 20 11:52:14.613230 master-0 kubenswrapper[7756]: I0220 11:52:14.613090 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" containerID="cri-o://435d301867d3455ab1264a913824bd1f0d3bfd3f13750e2fec9b286b064d3e49" gracePeriod=30 Feb 20 11:52:14.614673 master-0 kubenswrapper[7756]: I0220 11:52:14.614282 7756 patch_prober.go:28] interesting 
pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:14.614673 master-0 kubenswrapper[7756]: I0220 11:52:14.614430 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:14.623581 master-0 kubenswrapper[7756]: I0220 11:52:14.623480 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 20 11:52:15.762128 master-0 kubenswrapper[7756]: I0220 11:52:15.762033 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-mk9fd_02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/openshift-config-operator/1.log" Feb 20 11:52:15.763729 master-0 kubenswrapper[7756]: I0220 11:52:15.763637 7756 generic.go:334] "Generic (PLEG): container finished" podID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerID="435d301867d3455ab1264a913824bd1f0d3bfd3f13750e2fec9b286b064d3e49" exitCode=255 Feb 20 11:52:16.940405 master-0 kubenswrapper[7756]: E0220 11:52:16.940296 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:52:17.067361 master-0 kubenswrapper[7756]: I0220 11:52:17.067294 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:17.067662 master-0 kubenswrapper[7756]: I0220 11:52:17.067380 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:18.267749 master-0 kubenswrapper[7756]: E0220 11:52:18.267494 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-mf5rz.1895f21dd4314679 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-mf5rz,UID:339f8487-0d2b-4f4f-9872-c629e7f3e2e1,APIVersion:v1,ResourceVersion:7178,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 32.974s (32.974s including waiting). 
Image size: 1235965143 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:50:38.368130681 +0000 UTC m=+84.110378689,LastTimestamp:2026-02-20 11:50:38.368130681 +0000 UTC m=+84.110378689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:52:19.316749 master-0 kubenswrapper[7756]: E0220 11:52:19.316670 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 20 11:52:20.066568 master-0 kubenswrapper[7756]: I0220 11:52:20.066471 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:20.069041 master-0 kubenswrapper[7756]: I0220 11:52:20.066597 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:23.068088 master-0 kubenswrapper[7756]: I0220 11:52:23.067967 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:23.069164 master-0 kubenswrapper[7756]: I0220 
11:52:23.068115 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:26.067383 master-0 kubenswrapper[7756]: I0220 11:52:26.067309 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Feb 20 11:52:26.068786 master-0 kubenswrapper[7756]: I0220 11:52:26.068703 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Feb 20 11:52:27.626554 master-0 kubenswrapper[7756]: E0220 11:52:27.626417 7756 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.015s" Feb 20 11:52:27.626554 master-0 kubenswrapper[7756]: I0220 11:52:27.626501 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 11:52:27.630828 master-0 kubenswrapper[7756]: I0220 11:52:27.630759 7756 scope.go:117] "RemoveContainer" containerID="f4d85100cd0f06816a98689538bc93ed981f60823f3ce37e7c844447bcdb96ee" Feb 20 11:52:27.654603 master-0 kubenswrapper[7756]: I0220 11:52:27.653590 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 20 11:52:27.666435 
master-0 kubenswrapper[7756]: I0220 11:52:27.664934 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665113 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665130 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665143 7756 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="0df88f5c-0fe8-484b-9970-570f4c259a6c" Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665163 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665174 7756 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="0df88f5c-0fe8-484b-9970-570f4c259a6c" Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665185 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" event={"ID":"1df81fcc-f967-4874-ad16-1a89f0e7875a","Type":"ContainerDied","Data":"5461ac8869ede1ae48aaf443305cec8c0cf9a21a54dc206e103440a3f966bcc9"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665204 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" event={"ID":"e0b28c90-d5b6-44f3-867c-020ece32ac7d","Type":"ContainerDied","Data":"73c4ac8066ad3eb7342716309b7b8a802bf833f8fcd163ad12901b630f6305c2"} Feb 20 11:52:27.666435 
master-0 kubenswrapper[7756]: I0220 11:52:27.665222 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-fv598" event={"ID":"312ca024-c8f0-4994-8f9a-b707607341fe","Type":"ContainerDied","Data":"9e91bb7cb260950fd5e975354ec43adcbf694e33c154dd1b679deca6be0b9cfb"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665248 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" event={"ID":"f1388469-5e55-4c1b-97c3-c88777f29ae7","Type":"ContainerDied","Data":"b288109ee32770ae0136eb8073a319dc58d7b8d8a7d067c5f9bf71abd12290e4"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665260 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665273 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" event={"ID":"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145","Type":"ContainerDied","Data":"65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665288 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" event={"ID":"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145","Type":"ContainerStarted","Data":"435d301867d3455ab1264a913824bd1f0d3bfd3f13750e2fec9b286b064d3e49"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665299 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" 
event={"ID":"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9","Type":"ContainerDied","Data":"c3fd58850441274093931c36087d9a8518e8af6cd5182fdb00d74233da8d66da"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665313 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" event={"ID":"6c3aa45a-44cc-48fb-a478-ce01a70c4b02","Type":"ContainerDied","Data":"f4d85100cd0f06816a98689538bc93ed981f60823f3ce37e7c844447bcdb96ee"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665326 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665338 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665348 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665359 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665371 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665383 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-psm4s" event={"ID":"836a6d7e-9b26-425f-ae21-00422515d7fe","Type":"ContainerDied","Data":"ace904c5f4a3faa1035b1dcf89c693ce9b93dceae341e4edfb98ee1576eea9b6"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665397 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" event={"ID":"1d3a36bb-9d11-48b3-a3b5-07b47738ef97","Type":"ContainerDied","Data":"8d90051cb425dcfb05eea700daacd614186eaabfc560fdf17a2b201fc46c56ad"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665410 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7bd4430b-8dbc-46df-9efe-49d520a7c75a","Type":"ContainerDied","Data":"e80cac2721cbb0873c9a56ecbcc2ab13f0cf0ddd137a7458a4798813bbf93c32"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665422 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"74e9ba02-39d0-41fb-aed1-39923698bc0b","Type":"ContainerDied","Data":"ca54dfc79fe363224f0633dc3e9a5365e79752aa92793a430f4511b5aeb939dc"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665434 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"407a4490b53b516c4eaa24c4972588c07da9b5f9574f9b35da5b44a438b78bcc"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665447 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" event={"ID":"01e90033-9ddf-41b4-ab61-e89add6c2fde","Type":"ContainerDied","Data":"5602fcf86766ef7d0d60953da5d2c52d3e2681c284b76402a701dd6648958446"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665459 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerDied","Data":"3d07e9c592eed7a379f55e981ead57df10fdecdbcdadc7facb3720be20c537af"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665472 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"91bf4bc38d2da6c505ee04354464ef749c6984385a6a3cb062fc7393534e0bd7"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665483 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" event={"ID":"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145","Type":"ContainerDied","Data":"435d301867d3455ab1264a913824bd1f0d3bfd3f13750e2fec9b286b064d3e49"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.665494 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" event={"ID":"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145","Type":"ContainerStarted","Data":"31ff1e117529b9aa438962fcdb3c5051bf53ab61f9540449f696309f0c076076"} Feb 20 11:52:27.666435 master-0 kubenswrapper[7756]: I0220 11:52:27.666027 7756 scope.go:117] "RemoveContainer" containerID="3d07e9c592eed7a379f55e981ead57df10fdecdbcdadc7facb3720be20c537af" Feb 20 11:52:27.674929 master-0 kubenswrapper[7756]: I0220 11:52:27.674911 7756 scope.go:117] "RemoveContainer" containerID="c3fd58850441274093931c36087d9a8518e8af6cd5182fdb00d74233da8d66da" 
Feb 20 11:52:27.675868 master-0 kubenswrapper[7756]: I0220 11:52:27.675832 7756 scope.go:117] "RemoveContainer" containerID="73c4ac8066ad3eb7342716309b7b8a802bf833f8fcd163ad12901b630f6305c2" Feb 20 11:52:27.676441 master-0 kubenswrapper[7756]: I0220 11:52:27.676379 7756 scope.go:117] "RemoveContainer" containerID="65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b" Feb 20 11:52:27.676557 master-0 kubenswrapper[7756]: I0220 11:52:27.676432 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 11:52:27.678369 master-0 kubenswrapper[7756]: I0220 11:52:27.677874 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 11:52:27.678887 master-0 kubenswrapper[7756]: I0220 11:52:27.678838 7756 scope.go:117] "RemoveContainer" containerID="b288109ee32770ae0136eb8073a319dc58d7b8d8a7d067c5f9bf71abd12290e4" Feb 20 11:52:27.679180 master-0 kubenswrapper[7756]: I0220 11:52:27.679133 7756 scope.go:117] "RemoveContainer" containerID="8d90051cb425dcfb05eea700daacd614186eaabfc560fdf17a2b201fc46c56ad" Feb 20 11:52:27.679262 master-0 kubenswrapper[7756]: I0220 11:52:27.679247 7756 scope.go:117] "RemoveContainer" containerID="9e91bb7cb260950fd5e975354ec43adcbf694e33c154dd1b679deca6be0b9cfb" Feb 20 11:52:27.682098 master-0 kubenswrapper[7756]: I0220 11:52:27.679795 7756 scope.go:117] "RemoveContainer" containerID="5602fcf86766ef7d0d60953da5d2c52d3e2681c284b76402a701dd6648958446" Feb 20 11:52:27.682098 master-0 kubenswrapper[7756]: I0220 11:52:27.679914 7756 scope.go:117] "RemoveContainer" containerID="5461ac8869ede1ae48aaf443305cec8c0cf9a21a54dc206e103440a3f966bcc9" Feb 20 11:52:27.701786 master-0 kubenswrapper[7756]: I0220 11:52:27.701715 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"] Feb 20 11:52:27.711103 
master-0 kubenswrapper[7756]: I0220 11:52:27.711045 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"] Feb 20 11:52:27.717412 master-0 kubenswrapper[7756]: I0220 11:52:27.716040 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"] Feb 20 11:52:27.720268 master-0 kubenswrapper[7756]: I0220 11:52:27.719591 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 11:52:27.720268 master-0 kubenswrapper[7756]: I0220 11:52:27.719695 7756 scope.go:117] "RemoveContainer" containerID="ace904c5f4a3faa1035b1dcf89c693ce9b93dceae341e4edfb98ee1576eea9b6" Feb 20 11:52:27.720713 master-0 kubenswrapper[7756]: I0220 11:52:27.720677 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 20 11:52:27.758755 master-0 kubenswrapper[7756]: I0220 11:52:27.758619 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj"] Feb 20 11:52:27.784714 master-0 kubenswrapper[7756]: I0220 11:52:27.778952 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q"] Feb 20 11:52:27.784714 master-0 kubenswrapper[7756]: I0220 11:52:27.780191 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-89t2q" podStartSLOduration=107.758853905 podStartE2EDuration="2m17.780168195s" podCreationTimestamp="2026-02-20 11:50:10 +0000 UTC" firstStartedPulling="2026-02-20 11:50:12.490808725 +0000 UTC m=+58.233056723" lastFinishedPulling="2026-02-20 11:50:42.512122995 +0000 UTC m=+88.254371013" observedRunningTime="2026-02-20 11:52:27.655718528 +0000 UTC m=+193.397966546" 
watchObservedRunningTime="2026-02-20 11:52:27.780168195 +0000 UTC m=+193.522416203" Feb 20 11:52:27.802945 master-0 kubenswrapper[7756]: I0220 11:52:27.785118 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"] Feb 20 11:52:27.802945 master-0 kubenswrapper[7756]: I0220 11:52:27.800752 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-hsjr7"] Feb 20 11:52:27.806363 master-0 kubenswrapper[7756]: I0220 11:52:27.806326 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"] Feb 20 11:52:27.830864 master-0 kubenswrapper[7756]: I0220 11:52:27.830771 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 20 11:52:27.842548 master-0 kubenswrapper[7756]: I0220 11:52:27.835027 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkdbv"] Feb 20 11:52:27.842548 master-0 kubenswrapper[7756]: I0220 11:52:27.838291 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q287t" podStartSLOduration=106.818335059 podStartE2EDuration="2m16.83826782s" podCreationTimestamp="2026-02-20 11:50:11 +0000 UTC" firstStartedPulling="2026-02-20 11:50:12.492982697 +0000 UTC m=+58.235230705" lastFinishedPulling="2026-02-20 11:50:42.512915448 +0000 UTC m=+88.255163466" observedRunningTime="2026-02-20 11:52:27.803808138 +0000 UTC m=+193.546056146" watchObservedRunningTime="2026-02-20 11:52:27.83826782 +0000 UTC m=+193.580515828" Feb 20 11:52:27.842548 master-0 kubenswrapper[7756]: I0220 11:52:27.838515 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkdbv"] Feb 20 11:52:27.844728 master-0 kubenswrapper[7756]: I0220 11:52:27.844697 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 20 11:52:27.847579 master-0 kubenswrapper[7756]: I0220 11:52:27.847369 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 20 11:52:27.852602 master-0 kubenswrapper[7756]: I0220 11:52:27.851048 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"eb93420d-7c5a-4492-bd16-0104104406b4","Type":"ContainerStarted","Data":"2afa2b7ebc56f1b83ba6eea0931272420c7f296c9bd03931d27ab411eab9454b"} Feb 20 11:52:27.858851 master-0 kubenswrapper[7756]: I0220 11:52:27.858786 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" event={"ID":"bbdbadd9-eeaa-46ef-936e-5db8d395c118","Type":"ContainerStarted","Data":"441065bea23c74396afef0b5e83785e19b00c76012695c20dcc42243f3f809f3"} Feb 20 11:52:27.860340 master-0 kubenswrapper[7756]: I0220 11:52:27.860304 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" event={"ID":"c29fd426-7c89-434e-8332-1ca31075d4bf","Type":"ContainerStarted","Data":"8341254b8ef7faec187b8fe415e34b54bbc9e2b3da20b0d37f8005ee126bc089"} Feb 20 11:52:27.863387 master-0 kubenswrapper[7756]: I0220 11:52:27.863258 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" event={"ID":"daf25ef5-8247-4dbb-bdc1-55104b1015b7","Type":"ContainerStarted","Data":"9a135ef2bf0cea92e0e6d6c962da99bd4bf9e44e47304e0bce9ab97fa97ad55c"} Feb 20 11:52:27.877624 master-0 kubenswrapper[7756]: I0220 11:52:27.870916 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d8l9"] Feb 20 11:52:27.877624 master-0 kubenswrapper[7756]: I0220 11:52:27.873123 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" event={"ID":"ae1fd116-6f63-4344-b7af-278665649e5a","Type":"ContainerStarted","Data":"d72373aa995597c762385fce3b659d1483668a485ad494b6b7d7dd517099e857"} Feb 20 11:52:27.877624 master-0 kubenswrapper[7756]: I0220 11:52:27.873748 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4d8l9"] Feb 20 11:52:27.878072 master-0 kubenswrapper[7756]: I0220 11:52:27.878037 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" event={"ID":"62fc400b-b3dd-4134-bd27-69dd8369153a","Type":"ContainerStarted","Data":"327d8b93a0b8136db5fa70fbc964d1cbd5cf33fa512a27f0f0cf22df8db25f21"} Feb 20 11:52:27.904589 master-0 kubenswrapper[7756]: I0220 11:52:27.884663 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" event={"ID":"8ab951b1-6898-4357-b813-16365f3f89d5","Type":"ContainerStarted","Data":"529a8813da1db26a89da6c06d3a8fcc3afc05b6c872a6a5a2b9fb3ceb4df9687"} Feb 20 11:52:27.904589 master-0 kubenswrapper[7756]: I0220 11:52:27.887906 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" event={"ID":"ef18ace4-7316-4600-9be9-2adc792705e9","Type":"ContainerStarted","Data":"6a7bfbfdcb0537291cfa1b372b6f031e0ea91896123e7787ed049d4ad28854cc"} Feb 20 11:52:27.904589 master-0 kubenswrapper[7756]: I0220 11:52:27.900894 7756 scope.go:117] "RemoveContainer" containerID="6d3121ed9f14f1a68a11c14e19a8ba5e47d812ae84b3f62cc56772a81aa8f139" Feb 20 11:52:27.973272 master-0 kubenswrapper[7756]: I0220 11:52:27.973210 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-76v4z" podStartSLOduration=107.898485666 podStartE2EDuration="2m20.973194367s" podCreationTimestamp="2026-02-20 11:50:07 +0000 UTC" 
firstStartedPulling="2026-02-20 11:50:09.453618187 +0000 UTC m=+55.195866185" lastFinishedPulling="2026-02-20 11:50:42.528326878 +0000 UTC m=+88.270574886" observedRunningTime="2026-02-20 11:52:27.97148115 +0000 UTC m=+193.713729158" watchObservedRunningTime="2026-02-20 11:52:27.973194367 +0000 UTC m=+193.715442375" Feb 20 11:52:27.998549 master-0 kubenswrapper[7756]: I0220 11:52:27.996561 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" podStartSLOduration=121.996541442 podStartE2EDuration="2m1.996541442s" podCreationTimestamp="2026-02-20 11:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:52:27.994728342 +0000 UTC m=+193.736976350" watchObservedRunningTime="2026-02-20 11:52:27.996541442 +0000 UTC m=+193.738789450" Feb 20 11:52:28.018071 master-0 kubenswrapper[7756]: I0220 11:52:28.016228 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" podStartSLOduration=119.710927464 podStartE2EDuration="2m3.016208086s" podCreationTimestamp="2026-02-20 11:50:25 +0000 UTC" firstStartedPulling="2026-02-20 11:50:40.222633687 +0000 UTC m=+85.964881705" lastFinishedPulling="2026-02-20 11:50:43.527914279 +0000 UTC m=+89.270162327" observedRunningTime="2026-02-20 11:52:28.015390053 +0000 UTC m=+193.757638061" watchObservedRunningTime="2026-02-20 11:52:28.016208086 +0000 UTC m=+193.758456104" Feb 20 11:52:28.071263 master-0 kubenswrapper[7756]: I0220 11:52:28.071200 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" podStartSLOduration=129.391621987 podStartE2EDuration="2m11.071182853s" podCreationTimestamp="2026-02-20 11:50:17 +0000 UTC" 
firstStartedPulling="2026-02-20 11:50:40.824358715 +0000 UTC m=+86.566606743" lastFinishedPulling="2026-02-20 11:50:42.503919591 +0000 UTC m=+88.246167609" observedRunningTime="2026-02-20 11:52:28.069195018 +0000 UTC m=+193.811443026" watchObservedRunningTime="2026-02-20 11:52:28.071182853 +0000 UTC m=+193.813430861" Feb 20 11:52:28.103178 master-0 kubenswrapper[7756]: I0220 11:52:28.102771 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 20 11:52:28.110890 master-0 kubenswrapper[7756]: I0220 11:52:28.109538 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 20 11:52:28.125438 master-0 kubenswrapper[7756]: I0220 11:52:28.125378 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7kn5q" podStartSLOduration=108.07442797 podStartE2EDuration="2m20.12536153s" podCreationTimestamp="2026-02-20 11:50:08 +0000 UTC" firstStartedPulling="2026-02-20 11:50:10.461260407 +0000 UTC m=+56.203508425" lastFinishedPulling="2026-02-20 11:50:42.512193967 +0000 UTC m=+88.254441985" observedRunningTime="2026-02-20 11:52:28.122904822 +0000 UTC m=+193.865152830" watchObservedRunningTime="2026-02-20 11:52:28.12536153 +0000 UTC m=+193.867609538" Feb 20 11:52:28.240417 master-0 kubenswrapper[7756]: I0220 11:52:28.234047 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mf5rz"] Feb 20 11:52:28.245381 master-0 kubenswrapper[7756]: I0220 11:52:28.245303 7756 scope.go:117] "RemoveContainer" containerID="731cb148dbfdffc2b55c2372adae7ffe3b1128ca5f50a9d64465c2aba12d6905" Feb 20 11:52:28.250867 master-0 kubenswrapper[7756]: I0220 11:52:28.250823 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mf5rz"] Feb 20 11:52:28.257504 master-0 kubenswrapper[7756]: I0220 11:52:28.257223 7756 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" podStartSLOduration=110.421086717 podStartE2EDuration="2m14.257197731s" podCreationTimestamp="2026-02-20 11:50:14 +0000 UTC" firstStartedPulling="2026-02-20 11:50:16.440467955 +0000 UTC m=+62.182715963" lastFinishedPulling="2026-02-20 11:50:40.276578969 +0000 UTC m=+86.018826977" observedRunningTime="2026-02-20 11:52:28.253768897 +0000 UTC m=+193.996016935" watchObservedRunningTime="2026-02-20 11:52:28.257197731 +0000 UTC m=+193.999445749" Feb 20 11:52:28.308476 master-0 kubenswrapper[7756]: I0220 11:52:28.307383 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ps68j"] Feb 20 11:52:28.312089 master-0 kubenswrapper[7756]: I0220 11:52:28.312041 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ps68j"] Feb 20 11:52:28.351325 master-0 kubenswrapper[7756]: I0220 11:52:28.340216 7756 scope.go:117] "RemoveContainer" containerID="65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b" Feb 20 11:52:28.351325 master-0 kubenswrapper[7756]: E0220 11:52:28.340846 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b\": container with ID starting with 65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b not found: ID does not exist" containerID="65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b" Feb 20 11:52:28.351325 master-0 kubenswrapper[7756]: I0220 11:52:28.340873 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b"} err="failed to get container status \"65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b\": rpc error: code = NotFound 
desc = could not find container \"65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b\": container with ID starting with 65fc745d32199b41ad554a4c4ed1944167b7da7496dffcee77c17ec0d2f1a51b not found: ID does not exist" Feb 20 11:52:28.468661 master-0 kubenswrapper[7756]: I0220 11:52:28.467995 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_74e9ba02-39d0-41fb-aed1-39923698bc0b/installer/0.log" Feb 20 11:52:28.468661 master-0 kubenswrapper[7756]: I0220 11:52:28.468057 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 11:52:28.473600 master-0 kubenswrapper[7756]: I0220 11:52:28.473478 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq"] Feb 20 11:52:28.549438 master-0 kubenswrapper[7756]: I0220 11:52:28.549354 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.5493377910000001 podStartE2EDuration="1.549337791s" podCreationTimestamp="2026-02-20 11:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:52:28.542282926 +0000 UTC m=+194.284530954" watchObservedRunningTime="2026-02-20 11:52:28.549337791 +0000 UTC m=+194.291585799" Feb 20 11:52:28.589446 master-0 kubenswrapper[7756]: I0220 11:52:28.589403 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7bd4430b-8dbc-46df-9efe-49d520a7c75a/installer/0.log" Feb 20 11:52:28.589566 master-0 kubenswrapper[7756]: I0220 11:52:28.589485 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:52:28.593740 master-0 kubenswrapper[7756]: I0220 11:52:28.593702 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339f8487-0d2b-4f4f-9872-c629e7f3e2e1" path="/var/lib/kubelet/pods/339f8487-0d2b-4f4f-9872-c629e7f3e2e1/volumes" Feb 20 11:52:28.594357 master-0 kubenswrapper[7756]: I0220 11:52:28.594305 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3733ccb5-2cea-4151-a2a7-d9c089a34cbc" path="/var/lib/kubelet/pods/3733ccb5-2cea-4151-a2a7-d9c089a34cbc/volumes" Feb 20 11:52:28.594881 master-0 kubenswrapper[7756]: I0220 11:52:28.594863 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50084c46-32ff-4e8a-b35e-8e7b1943cc04" path="/var/lib/kubelet/pods/50084c46-32ff-4e8a-b35e-8e7b1943cc04/volumes" Feb 20 11:52:28.595826 master-0 kubenswrapper[7756]: I0220 11:52:28.595801 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52c9c322-a0d1-4d27-b3bc-aaa8bd25beec" path="/var/lib/kubelet/pods/52c9c322-a0d1-4d27-b3bc-aaa8bd25beec/volumes" Feb 20 11:52:28.596368 master-0 kubenswrapper[7756]: I0220 11:52:28.596342 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a827d746-cfd3-48a2-a20b-2ff1526986b9" path="/var/lib/kubelet/pods/a827d746-cfd3-48a2-a20b-2ff1526986b9/volumes" Feb 20 11:52:28.596954 master-0 kubenswrapper[7756]: I0220 11:52:28.596931 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" path="/var/lib/kubelet/pods/c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd/volumes" Feb 20 11:52:28.608916 master-0 kubenswrapper[7756]: I0220 11:52:28.608888 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-var-lock\") pod \"74e9ba02-39d0-41fb-aed1-39923698bc0b\" (UID: 
\"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " Feb 20 11:52:28.609000 master-0 kubenswrapper[7756]: I0220 11:52:28.608960 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9ba02-39d0-41fb-aed1-39923698bc0b-kube-api-access\") pod \"74e9ba02-39d0-41fb-aed1-39923698bc0b\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " Feb 20 11:52:28.609000 master-0 kubenswrapper[7756]: I0220 11:52:28.608989 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-kubelet-dir\") pod \"74e9ba02-39d0-41fb-aed1-39923698bc0b\" (UID: \"74e9ba02-39d0-41fb-aed1-39923698bc0b\") " Feb 20 11:52:28.609074 master-0 kubenswrapper[7756]: I0220 11:52:28.609050 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "74e9ba02-39d0-41fb-aed1-39923698bc0b" (UID: "74e9ba02-39d0-41fb-aed1-39923698bc0b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:52:28.609108 master-0 kubenswrapper[7756]: I0220 11:52:28.609079 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-var-lock" (OuterVolumeSpecName: "var-lock") pod "74e9ba02-39d0-41fb-aed1-39923698bc0b" (UID: "74e9ba02-39d0-41fb-aed1-39923698bc0b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:52:28.609837 master-0 kubenswrapper[7756]: I0220 11:52:28.609815 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 11:52:28.609918 master-0 kubenswrapper[7756]: I0220 11:52:28.609907 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74e9ba02-39d0-41fb-aed1-39923698bc0b-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 11:52:28.621778 master-0 kubenswrapper[7756]: I0220 11:52:28.621735 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e9ba02-39d0-41fb-aed1-39923698bc0b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "74e9ba02-39d0-41fb-aed1-39923698bc0b" (UID: "74e9ba02-39d0-41fb-aed1-39923698bc0b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 11:52:28.670712 master-0 kubenswrapper[7756]: I0220 11:52:28.670670 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 11:52:28.671153 master-0 kubenswrapper[7756]: I0220 11:52:28.670728 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 11:52:28.710803 master-0 kubenswrapper[7756]: I0220 11:52:28.710653 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kubelet-dir\") pod \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " Feb 20 11:52:28.710803 master-0 kubenswrapper[7756]: I0220 11:52:28.710703 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-var-lock\") pod \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\" (UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " Feb 20 11:52:28.710803 master-0 kubenswrapper[7756]: I0220 11:52:28.710725 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kube-api-access\") pod \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\" 
(UID: \"7bd4430b-8dbc-46df-9efe-49d520a7c75a\") " Feb 20 11:52:28.711090 master-0 kubenswrapper[7756]: I0220 11:52:28.710983 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9ba02-39d0-41fb-aed1-39923698bc0b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 11:52:28.711391 master-0 kubenswrapper[7756]: I0220 11:52:28.711368 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7bd4430b-8dbc-46df-9efe-49d520a7c75a" (UID: "7bd4430b-8dbc-46df-9efe-49d520a7c75a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:52:28.711435 master-0 kubenswrapper[7756]: I0220 11:52:28.711401 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-var-lock" (OuterVolumeSpecName: "var-lock") pod "7bd4430b-8dbc-46df-9efe-49d520a7c75a" (UID: "7bd4430b-8dbc-46df-9efe-49d520a7c75a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:52:28.714596 master-0 kubenswrapper[7756]: I0220 11:52:28.714571 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7bd4430b-8dbc-46df-9efe-49d520a7c75a" (UID: "7bd4430b-8dbc-46df-9efe-49d520a7c75a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 11:52:28.812449 master-0 kubenswrapper[7756]: I0220 11:52:28.812366 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 11:52:28.812449 master-0 kubenswrapper[7756]: I0220 11:52:28.812416 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7bd4430b-8dbc-46df-9efe-49d520a7c75a-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 11:52:28.812449 master-0 kubenswrapper[7756]: I0220 11:52:28.812426 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bd4430b-8dbc-46df-9efe-49d520a7c75a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 11:52:28.898555 master-0 kubenswrapper[7756]: I0220 11:52:28.898462 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" event={"ID":"c29fd426-7c89-434e-8332-1ca31075d4bf","Type":"ContainerStarted","Data":"b4292dccd690e9143e933dee29f59d01786a2f035fd7b57469d300f2f8a55365"} Feb 20 11:52:28.900274 master-0 kubenswrapper[7756]: I0220 11:52:28.899231 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 11:52:28.945020 master-0 kubenswrapper[7756]: I0220 11:52:28.944158 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7bd4430b-8dbc-46df-9efe-49d520a7c75a/installer/0.log" Feb 20 11:52:28.945020 master-0 kubenswrapper[7756]: I0220 11:52:28.944488 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 11:52:28.945692 master-0 kubenswrapper[7756]: I0220 11:52:28.945636 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7bd4430b-8dbc-46df-9efe-49d520a7c75a","Type":"ContainerDied","Data":"55661699f170197933eff4a7d62dfa673dfa4d47667b396a98f1b608289f577a"} Feb 20 11:52:28.945754 master-0 kubenswrapper[7756]: I0220 11:52:28.945700 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55661699f170197933eff4a7d62dfa673dfa4d47667b396a98f1b608289f577a" Feb 20 11:52:28.967064 master-0 kubenswrapper[7756]: I0220 11:52:28.967005 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" event={"ID":"8ab951b1-6898-4357-b813-16365f3f89d5","Type":"ContainerStarted","Data":"af8b342efbbe9d54cc223046ff30388902ac88ae9934ae255f20adc3d9b1a9e6"} Feb 20 11:52:28.971567 master-0 kubenswrapper[7756]: I0220 11:52:28.971503 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" event={"ID":"f1388469-5e55-4c1b-97c3-c88777f29ae7","Type":"ContainerStarted","Data":"88cc61da7ab6cb75fdb835e40a94626015dfbde6fc3408dad868f6e9db7703ee"} Feb 20 11:52:28.981225 master-0 kubenswrapper[7756]: I0220 11:52:28.981187 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" event={"ID":"1d3a36bb-9d11-48b3-a3b5-07b47738ef97","Type":"ContainerStarted","Data":"c6679863b5436d03c685416538ec6a0c239b8d55dfa6ed45b92990d366d1cd74"} Feb 20 11:52:28.985591 master-0 kubenswrapper[7756]: I0220 11:52:28.983930 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-qwwbk_01e90033-9ddf-41b4-ab61-e89add6c2fde/service-ca-operator/1.log" Feb 20 
11:52:28.985591 master-0 kubenswrapper[7756]: I0220 11:52:28.983993 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" event={"ID":"01e90033-9ddf-41b4-ab61-e89add6c2fde","Type":"ContainerStarted","Data":"f9528f6d61bdc5d1282c2d9d2f6d9758a8e04364c9337158e14aef2c2ffff6b4"} Feb 20 11:52:28.989792 master-0 kubenswrapper[7756]: I0220 11:52:28.989768 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 20 11:52:28.989958 master-0 kubenswrapper[7756]: I0220 11:52:28.989931 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 20 11:52:28.994842 master-0 kubenswrapper[7756]: I0220 11:52:28.994815 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" event={"ID":"62fc400b-b3dd-4134-bd27-69dd8369153a","Type":"ContainerStarted","Data":"fd2c8af561e27d18a0686fcdd7532006d498e33008bf3db8efeb4fee359764ee"} Feb 20 11:52:28.998072 master-0 kubenswrapper[7756]: I0220 11:52:28.998053 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" event={"ID":"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9","Type":"ContainerStarted","Data":"74b4edd626e209801e3786cc1dc29bf2a950a730269d6de5ed8a28d1b435f9b4"} Feb 20 11:52:28.999831 master-0 kubenswrapper[7756]: I0220 11:52:28.999809 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" event={"ID":"ef18ace4-7316-4600-9be9-2adc792705e9","Type":"ContainerStarted","Data":"26af0c48ed535d570697a634ddeca00f43fded3a46906b300c51504210bc184d"} Feb 20 11:52:29.013871 master-0 kubenswrapper[7756]: I0220 11:52:29.013846 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-etcd/etcd-master-0"
Feb 20 11:52:29.014540 master-0 kubenswrapper[7756]: I0220 11:52:29.014509 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-fv598" event={"ID":"312ca024-c8f0-4994-8f9a-b707607341fe","Type":"ContainerStarted","Data":"a2f57d0cbbd57b5325ad0aac9713219f739036a6acc3195c5bbfa570326dbcd4"}
Feb 20 11:52:29.017314 master-0 kubenswrapper[7756]: I0220 11:52:29.017297 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" event={"ID":"5c104245-d078-4856-9a60-207bb6efcfe8","Type":"ContainerStarted","Data":"b89fd2b72c95ae892c409ef90ceca60361969c1db213c09131f13705c3334986"}
Feb 20 11:52:29.019682 master-0 kubenswrapper[7756]: I0220 11:52:29.019669 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-mk9fd_02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/openshift-config-operator/1.log"
Feb 20 11:52:29.022098 master-0 kubenswrapper[7756]: I0220 11:52:29.022086 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_74e9ba02-39d0-41fb-aed1-39923698bc0b/installer/0.log"
Feb 20 11:52:29.022236 master-0 kubenswrapper[7756]: I0220 11:52:29.022221 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"74e9ba02-39d0-41fb-aed1-39923698bc0b","Type":"ContainerDied","Data":"79f70b0ba5af48f333359cfd6f71307155a704d196b35bf91b2237ea4c31acbc"}
Feb 20 11:52:29.022332 master-0 kubenswrapper[7756]: I0220 11:52:29.022321 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79f70b0ba5af48f333359cfd6f71307155a704d196b35bf91b2237ea4c31acbc"
Feb 20 11:52:29.022475 master-0 kubenswrapper[7756]: I0220 11:52:29.022464 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 20 11:52:29.031488 master-0 kubenswrapper[7756]: I0220 11:52:29.031446 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"eb93420d-7c5a-4492-bd16-0104104406b4","Type":"ContainerStarted","Data":"76e85ab561cbad6abc6fb8fe1c91c7b03e4b40963c9f88e69d0121b220aa047b"}
Feb 20 11:52:29.033685 master-0 kubenswrapper[7756]: I0220 11:52:29.033649 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" event={"ID":"1df81fcc-f967-4874-ad16-1a89f0e7875a","Type":"ContainerStarted","Data":"f658812d3a0840e273c061153c1646fa88e6e4617da166e0ff391ed3c4a82be1"}
Feb 20 11:52:29.036137 master-0 kubenswrapper[7756]: I0220 11:52:29.036037 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" event={"ID":"6c3aa45a-44cc-48fb-a478-ce01a70c4b02","Type":"ContainerStarted","Data":"de3cf90976c88f94ee4890bd56c7f0488152bb4020f300dabbcd987cd8523183"}
Feb 20 11:52:29.038928 master-0 kubenswrapper[7756]: I0220 11:52:29.038909 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/0.log"
Feb 20 11:52:29.038990 master-0 kubenswrapper[7756]: I0220 11:52:29.038942 7756 generic.go:334] "Generic (PLEG): container finished" podID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4" containerID="dda80c885f92b57bca602a3a57fe7a72f775d424964427877643f5139f187abf" exitCode=1
Feb 20 11:52:29.038990 master-0 kubenswrapper[7756]: I0220 11:52:29.038980 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerDied","Data":"dda80c885f92b57bca602a3a57fe7a72f775d424964427877643f5139f187abf"}
Feb 20 11:52:29.039395 master-0 kubenswrapper[7756]: I0220 11:52:29.039380 7756 scope.go:117] "RemoveContainer" containerID="dda80c885f92b57bca602a3a57fe7a72f775d424964427877643f5139f187abf"
Feb 20 11:52:29.043601 master-0 kubenswrapper[7756]: I0220 11:52:29.042209 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-psm4s_836a6d7e-9b26-425f-ae21-00422515d7fe/approver/0.log"
Feb 20 11:52:29.044190 master-0 kubenswrapper[7756]: I0220 11:52:29.044148 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-psm4s" event={"ID":"836a6d7e-9b26-425f-ae21-00422515d7fe","Type":"ContainerStarted","Data":"8ee62624db1bf28c038634c2f6ef81ccfdeef3084369265ba22b099552cdd3a8"}
Feb 20 11:52:29.054283 master-0 kubenswrapper[7756]: I0220 11:52:29.053973 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" event={"ID":"e0b28c90-d5b6-44f3-867c-020ece32ac7d","Type":"ContainerStarted","Data":"77890d6705292359843e6d71e469ce5d5c4b9d196554afc0ee3e0617dea2273f"}
Feb 20 11:52:29.058160 master-0 kubenswrapper[7756]: I0220 11:52:29.058144 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" event={"ID":"ae1fd116-6f63-4344-b7af-278665649e5a","Type":"ContainerStarted","Data":"442d700eda6265520452c6c8dcf79b6f4628232810e21ea7d7fb40076188bc56"}
Feb 20 11:52:29.060274 master-0 kubenswrapper[7756]: I0220 11:52:29.060260 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 11:52:29.063377 master-0 kubenswrapper[7756]: I0220 11:52:29.063349 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 11:52:29.065122 master-0 kubenswrapper[7756]: I0220 11:52:29.065109 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/0.log"
Feb 20 11:52:29.065260 master-0 kubenswrapper[7756]: I0220 11:52:29.065244 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerStarted","Data":"92faf490ce07d81111f5c9023da3d201553d13fd825a7918d8a229dadf38bba3"}
Feb 20 11:52:29.071654 master-0 kubenswrapper[7756]: I0220 11:52:29.071611 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" event={"ID":"bd609bd3-2525-4b88-8f07-94a0418fb582","Type":"ContainerStarted","Data":"c5552d51223ad679691154e2dedf71641b800849a05e120dc501f6840be1e99e"}
Feb 20 11:52:29.087754 master-0 kubenswrapper[7756]: I0220 11:52:29.087716 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 20 11:52:29.115741 master-0 kubenswrapper[7756]: I0220 11:52:29.115676 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" podStartSLOduration=136.115658525 podStartE2EDuration="2m16.115658525s" podCreationTimestamp="2026-02-20 11:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:52:29.095750335 +0000 UTC m=+194.837998343" watchObservedRunningTime="2026-02-20 11:52:29.115658525 +0000 UTC m=+194.857906523"
Feb 20 11:52:29.269543 master-0 kubenswrapper[7756]: I0220 11:52:29.262989 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" podStartSLOduration=124.262971664 podStartE2EDuration="2m4.262971664s" podCreationTimestamp="2026-02-20 11:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:52:29.259616952 +0000 UTC m=+195.001864960" watchObservedRunningTime="2026-02-20 11:52:29.262971664 +0000 UTC m=+195.005219662"
Feb 20 11:52:29.368592 master-0 kubenswrapper[7756]: I0220 11:52:29.368000 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=131.367979945 podStartE2EDuration="2m11.367979945s" podCreationTimestamp="2026-02-20 11:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:52:29.366285048 +0000 UTC m=+195.108533056" watchObservedRunningTime="2026-02-20 11:52:29.367979945 +0000 UTC m=+195.110227953"
Feb 20 11:52:29.672020 master-0 kubenswrapper[7756]: I0220 11:52:29.671858 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 11:52:29.672020 master-0 kubenswrapper[7756]: I0220 11:52:29.671961 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:52:29.887242 master-0 kubenswrapper[7756]: I0220 11:52:29.887159 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 11:52:29.887434 master-0 kubenswrapper[7756]: I0220 11:52:29.887263 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:52:29.899566 master-0 kubenswrapper[7756]: I0220 11:52:29.899520 7756 patch_prober.go:28] interesting pod/route-controller-manager-689d967cd5-ptpq6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 11:52:29.899730 master-0 kubenswrapper[7756]: I0220 11:52:29.899704 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:52:30.086193 master-0 kubenswrapper[7756]: I0220 11:52:30.086137 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/0.log"
Feb 20 11:52:30.086500 master-0 kubenswrapper[7756]: I0220 11:52:30.086439 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerStarted","Data":"60fcf3fcd8aaaa40b7dfe96f543b72ba8310165975661dc288cd77f2a4374875"}
Feb 20 11:52:30.280203 master-0 kubenswrapper[7756]: I0220 11:52:30.280159 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:52:30.672300 master-0 kubenswrapper[7756]: I0220 11:52:30.672136 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 11:52:30.672300 master-0 kubenswrapper[7756]: I0220 11:52:30.672217 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:52:31.087075 master-0 kubenswrapper[7756]: I0220 11:52:31.087008 7756 patch_prober.go:28] interesting pod/route-controller-manager-689d967cd5-ptpq6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 11:52:31.087279 master-0 kubenswrapper[7756]: I0220 11:52:31.087108 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:52:31.093639 master-0 kubenswrapper[7756]: I0220 11:52:31.093604 7756 generic.go:334] "Generic (PLEG): container finished" podID="6dfca740-0387-428a-b957-3e8a09c6e352" containerID="3d84b64b15cc0bdfd81208f0d2d2402b1dd43fcf0c81056aa6b599a33f0ef14d" exitCode=0
Feb 20 11:52:31.093818 master-0 kubenswrapper[7756]: I0220 11:52:31.093755 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" event={"ID":"6dfca740-0387-428a-b957-3e8a09c6e352","Type":"ContainerDied","Data":"3d84b64b15cc0bdfd81208f0d2d2402b1dd43fcf0c81056aa6b599a33f0ef14d"}
Feb 20 11:52:31.094742 master-0 kubenswrapper[7756]: I0220 11:52:31.094727 7756 scope.go:117] "RemoveContainer" containerID="3d84b64b15cc0bdfd81208f0d2d2402b1dd43fcf0c81056aa6b599a33f0ef14d"
Feb 20 11:52:31.743193 master-0 kubenswrapper[7756]: I0220 11:52:31.743043 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:52:31.749218 master-0 kubenswrapper[7756]: I0220 11:52:31.749190 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:52:32.094581 master-0 kubenswrapper[7756]: I0220 11:52:32.094512 7756 patch_prober.go:28] interesting pod/route-controller-manager-689d967cd5-ptpq6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 11:52:32.094820 master-0 kubenswrapper[7756]: I0220 11:52:32.094589 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:52:32.108687 master-0 kubenswrapper[7756]: I0220 11:52:32.108637 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:52:32.518937 master-0 kubenswrapper[7756]: E0220 11:52:32.518860 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Feb 20 11:52:32.886027 master-0 kubenswrapper[7756]: I0220 11:52:32.885315 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 11:52:32.886027 master-0 kubenswrapper[7756]: I0220 11:52:32.885402 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:52:33.067519 master-0 kubenswrapper[7756]: I0220 11:52:33.067461 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 11:52:33.067719 master-0 kubenswrapper[7756]: I0220 11:52:33.067539 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:52:35.126012 master-0 kubenswrapper[7756]: I0220 11:52:35.125939 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" event={"ID":"8ab951b1-6898-4357-b813-16365f3f89d5","Type":"ContainerStarted","Data":"9a057bcbfd065697f6b207a64f408c746a9bea8b73ae774c709e37560f5635da"}
Feb 20 11:52:35.129234 master-0 kubenswrapper[7756]: I0220 11:52:35.129129 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" event={"ID":"bd609bd3-2525-4b88-8f07-94a0418fb582","Type":"ContainerStarted","Data":"f9e7a58cf124cb2ea74470d7f4d2441a6b37983e716fe9ac61ec9468f4bc2da4"}
Feb 20 11:52:35.129234 master-0 kubenswrapper[7756]: I0220 11:52:35.129205 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" event={"ID":"bd609bd3-2525-4b88-8f07-94a0418fb582","Type":"ContainerStarted","Data":"7e35d0d46d086257733502e192ec247382f7e26c3f0f6b4f8392900b3f91657b"}
Feb 20 11:52:35.131706 master-0 kubenswrapper[7756]: I0220 11:52:35.131639 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" event={"ID":"6dfca740-0387-428a-b957-3e8a09c6e352","Type":"ContainerStarted","Data":"30cc0163534ef05cf8f1af8016be6ca5a9410b7c83b47a06334775bed42b37ab"}
Feb 20 11:52:35.131947 master-0 kubenswrapper[7756]: I0220 11:52:35.131903 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:52:35.133820 master-0 kubenswrapper[7756]: I0220 11:52:35.133763 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 11:52:35.136692 master-0 kubenswrapper[7756]: I0220 11:52:35.135836 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" event={"ID":"bbdbadd9-eeaa-46ef-936e-5db8d395c118","Type":"ContainerStarted","Data":"6e11d702e4faa3980c4584f7fbbe0edd61d03b400f537710d4a26da3248d5efc"}
Feb 20 11:52:35.140348 master-0 kubenswrapper[7756]: I0220 11:52:35.140279 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" event={"ID":"daf25ef5-8247-4dbb-bdc1-55104b1015b7","Type":"ContainerStarted","Data":"180751a4cb1956975be660779736132f1c86a187ec23ebdfea0f247e5fc546f5"}
Feb 20 11:52:35.143010 master-0 kubenswrapper[7756]: I0220 11:52:35.142935 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" event={"ID":"5c104245-d078-4856-9a60-207bb6efcfe8","Type":"ContainerStarted","Data":"260c08382dea6b09e376dc9dd8b0af504cd7a62ae258245c12c72fc06c823b75"}
Feb 20 11:52:35.143191 master-0 kubenswrapper[7756]: I0220 11:52:35.142976 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" event={"ID":"5c104245-d078-4856-9a60-207bb6efcfe8","Type":"ContainerStarted","Data":"678dcabeedbfae524b8ddd915c1ce5eb2da984f089910e50d42ac2c7cc6c2cbf"}
Feb 20 11:52:35.149888 master-0 kubenswrapper[7756]: I0220 11:52:35.149819 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" podStartSLOduration=127.15175246 podStartE2EDuration="2m12.149804743s" podCreationTimestamp="2026-02-20 11:50:23 +0000 UTC" firstStartedPulling="2026-02-20 11:52:28.421834249 +0000 UTC m=+194.164082247" lastFinishedPulling="2026-02-20 11:52:33.419886522 +0000 UTC m=+199.162134530" observedRunningTime="2026-02-20 11:52:35.146157672 +0000 UTC m=+200.888405720" watchObservedRunningTime="2026-02-20 11:52:35.149804743 +0000 UTC m=+200.892052751"
Feb 20 11:52:35.198257 master-0 kubenswrapper[7756]: I0220 11:52:35.198198 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" podStartSLOduration=125.535383707 podStartE2EDuration="2m11.198184831s" podCreationTimestamp="2026-02-20 11:50:24 +0000 UTC" firstStartedPulling="2026-02-20 11:52:27.757581951 +0000 UTC m=+193.499829949" lastFinishedPulling="2026-02-20 11:52:33.420383065 +0000 UTC m=+199.162631073" observedRunningTime="2026-02-20 11:52:35.19595149 +0000 UTC m=+200.938199528" watchObservedRunningTime="2026-02-20 11:52:35.198184831 +0000 UTC m=+200.940432829"
Feb 20 11:52:35.223522 master-0 kubenswrapper[7756]: I0220 11:52:35.222095 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" podStartSLOduration=128.35079017 podStartE2EDuration="2m13.222084451s" podCreationTimestamp="2026-02-20 11:50:22 +0000 UTC" firstStartedPulling="2026-02-20 11:52:28.5797068 +0000 UTC m=+194.321954808" lastFinishedPulling="2026-02-20 11:52:33.451001051 +0000 UTC m=+199.193249089" observedRunningTime="2026-02-20 11:52:35.220979391 +0000 UTC m=+200.963227399" watchObservedRunningTime="2026-02-20 11:52:35.222084451 +0000 UTC m=+200.964332459"
Feb 20 11:52:35.249231 master-0 kubenswrapper[7756]: I0220 11:52:35.249125 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" podStartSLOduration=125.553193359 podStartE2EDuration="2m11.249098178s" podCreationTimestamp="2026-02-20 11:50:24 +0000 UTC" firstStartedPulling="2026-02-20 11:52:27.723583102 +0000 UTC m=+193.465831110" lastFinishedPulling="2026-02-20 11:52:33.419487921 +0000 UTC m=+199.161735929" observedRunningTime="2026-02-20 11:52:35.245596472 +0000 UTC m=+200.987844480" watchObservedRunningTime="2026-02-20 11:52:35.249098178 +0000 UTC m=+200.991346186"
Feb 20 11:52:35.272619 master-0 kubenswrapper[7756]: I0220 11:52:35.269606 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" podStartSLOduration=128.852448608 podStartE2EDuration="2m14.269581565s" podCreationTimestamp="2026-02-20 11:50:21 +0000 UTC" firstStartedPulling="2026-02-20 11:52:28.002306422 +0000 UTC m=+193.744554430" lastFinishedPulling="2026-02-20 11:52:33.419439359 +0000 UTC m=+199.161687387" observedRunningTime="2026-02-20 11:52:35.266931522 +0000 UTC m=+201.009179570" watchObservedRunningTime="2026-02-20 11:52:35.269581565 +0000 UTC m=+201.011829573"
Feb 20 11:52:35.886578 master-0 kubenswrapper[7756]: I0220 11:52:35.885791 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 11:52:35.886578 master-0 kubenswrapper[7756]: I0220 11:52:35.885963 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:52:35.886578 master-0 kubenswrapper[7756]: I0220 11:52:35.886005 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd"
Feb 20 11:52:35.886578 master-0 kubenswrapper[7756]: I0220 11:52:35.886588 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"31ff1e117529b9aa438962fcdb3c5051bf53ab61f9540449f696309f0c076076"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Feb 20 11:52:35.887109 master-0 kubenswrapper[7756]: I0220 11:52:35.886618 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" containerID="cri-o://31ff1e117529b9aa438962fcdb3c5051bf53ab61f9540449f696309f0c076076" gracePeriod=30
Feb 20 11:52:35.901708 master-0 kubenswrapper[7756]: I0220 11:52:35.901638 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": read tcp 10.128.0.2:56756->10.128.0.16:8443: read: connection reset by peer" start-of-body=
Feb 20 11:52:35.901708 master-0 kubenswrapper[7756]: I0220 11:52:35.901705 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": read tcp 10.128.0.2:56756->10.128.0.16:8443: read: connection reset by peer"
Feb 20 11:52:35.904000 master-0 kubenswrapper[7756]: I0220 11:52:35.903435 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body=
Feb 20 11:52:35.904000 master-0 kubenswrapper[7756]: I0220 11:52:35.903569 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused"
Feb 20 11:52:36.151763 master-0 kubenswrapper[7756]: I0220 11:52:36.151630 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-mk9fd_02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/openshift-config-operator/2.log"
Feb 20 11:52:36.153511 master-0 kubenswrapper[7756]: I0220 11:52:36.153460 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-mk9fd_02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/openshift-config-operator/1.log"
Feb 20 11:52:36.154027 master-0 kubenswrapper[7756]: I0220 11:52:36.153988 7756 generic.go:334] "Generic (PLEG): container finished" podID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerID="31ff1e117529b9aa438962fcdb3c5051bf53ab61f9540449f696309f0c076076" exitCode=255
Feb 20 11:52:36.154247 master-0 kubenswrapper[7756]: I0220 11:52:36.154202 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" event={"ID":"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145","Type":"ContainerDied","Data":"31ff1e117529b9aa438962fcdb3c5051bf53ab61f9540449f696309f0c076076"}
Feb 20 11:52:36.154486 master-0 kubenswrapper[7756]: I0220 11:52:36.154446 7756 scope.go:117] "RemoveContainer" containerID="435d301867d3455ab1264a913824bd1f0d3bfd3f13750e2fec9b286b064d3e49"
Feb 20 11:52:37.510582 master-0 kubenswrapper[7756]: I0220 11:52:37.507566 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 11:52:38.067078 master-0 kubenswrapper[7756]: I0220 11:52:38.067009 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body=
Feb 20 11:52:38.067286 master-0 kubenswrapper[7756]: I0220 11:52:38.067082 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused"
Feb 20 11:52:38.169098 master-0 kubenswrapper[7756]: I0220 11:52:38.168971 7756 generic.go:334] "Generic (PLEG): container finished" podID="98226a59-5234-48f3-a9cd-21de305810dc" containerID="c5857fd0f578f323286023fc24db8dcdefabd0753d52c557c0cb0421ff06a92f" exitCode=0
Feb 20 11:52:38.169098 master-0 kubenswrapper[7756]: I0220 11:52:38.169063 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" event={"ID":"98226a59-5234-48f3-a9cd-21de305810dc","Type":"ContainerDied","Data":"c5857fd0f578f323286023fc24db8dcdefabd0753d52c557c0cb0421ff06a92f"}
Feb 20 11:52:38.169713 master-0 kubenswrapper[7756]: I0220 11:52:38.169655 7756 scope.go:117] "RemoveContainer" containerID="c5857fd0f578f323286023fc24db8dcdefabd0753d52c557c0cb0421ff06a92f"
Feb 20 11:52:41.066989 master-0 kubenswrapper[7756]: I0220 11:52:41.066895 7756 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-mk9fd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body=
Feb 20 11:52:41.066989 master-0 kubenswrapper[7756]: I0220 11:52:41.066981 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" podUID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused"
Feb 20 11:52:41.194466 master-0 kubenswrapper[7756]: I0220 11:52:41.194285 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-vs87f_b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/manager/0.log"
Feb 20 11:52:41.194466 master-0 kubenswrapper[7756]: I0220 11:52:41.194371 7756 generic.go:334] "Generic (PLEG): container finished" podID="b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1" containerID="5bf57c12fc70c17e6a09a820bf2ab5c2dd4edbb89e20cced0e4474b7e6ce7231" exitCode=1
Feb 20 11:52:41.194466 master-0 kubenswrapper[7756]: I0220 11:52:41.194461 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" event={"ID":"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1","Type":"ContainerDied","Data":"5bf57c12fc70c17e6a09a820bf2ab5c2dd4edbb89e20cced0e4474b7e6ce7231"}
Feb 20 11:52:41.195187 master-0 kubenswrapper[7756]: I0220 11:52:41.195130 7756 scope.go:117] "RemoveContainer" containerID="5bf57c12fc70c17e6a09a820bf2ab5c2dd4edbb89e20cced0e4474b7e6ce7231"
Feb 20 11:52:41.197872 master-0 kubenswrapper[7756]: I0220 11:52:41.197805 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-k8vs5_d9f9442b-25b9-420f-b748-bb13423809fe/manager/0.log"
Feb 20 11:52:41.198707 master-0 kubenswrapper[7756]: I0220 11:52:41.198638 7756 generic.go:334] "Generic (PLEG): container finished" podID="d9f9442b-25b9-420f-b748-bb13423809fe" containerID="fd156bc7a5466d6b67b1239ac8613c9df410e89cc9c884ed83f3394a7c8ae304" exitCode=1
Feb 20 11:52:41.198707 master-0 kubenswrapper[7756]: I0220 11:52:41.198698 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" event={"ID":"d9f9442b-25b9-420f-b748-bb13423809fe","Type":"ContainerDied","Data":"fd156bc7a5466d6b67b1239ac8613c9df410e89cc9c884ed83f3394a7c8ae304"}
Feb 20 11:52:41.199702 master-0 kubenswrapper[7756]: I0220 11:52:41.199449 7756 scope.go:117] "RemoveContainer" containerID="fd156bc7a5466d6b67b1239ac8613c9df410e89cc9c884ed83f3394a7c8ae304"
Feb 20 11:52:42.207922 master-0 kubenswrapper[7756]: I0220 11:52:42.207855 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-vs87f_b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/manager/0.log"
Feb 20 11:52:42.208853 master-0 kubenswrapper[7756]: I0220 11:52:42.207989 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" event={"ID":"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1","Type":"ContainerStarted","Data":"c71d66a4b93651a9ca77699b6ac7544e90310b6a6968e997721a5f52319085ac"}
Feb 20 11:52:42.208853 master-0 kubenswrapper[7756]: I0220 11:52:42.208424 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 11:52:42.210435 master-0 kubenswrapper[7756]: I0220 11:52:42.210366 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" event={"ID":"62fc400b-b3dd-4134-bd27-69dd8369153a","Type":"ContainerStarted","Data":"175d65ef058d19df8765023bf60abee81ca3af2e1c89be3d588c68672b336027"}
Feb 20 11:52:42.212689 master-0 kubenswrapper[7756]: I0220 11:52:42.212638 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-mk9fd_02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/openshift-config-operator/2.log"
Feb 20 11:52:42.213140 master-0 kubenswrapper[7756]: I0220 11:52:42.213074 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" event={"ID":"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145","Type":"ContainerStarted","Data":"54ddc8e6d8cd639450f5e2ea3092a692309131bb3889d6645f816e9ec5ece2c4"}
Feb 20 11:52:42.213277 master-0 kubenswrapper[7756]: I0220 11:52:42.213198 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd"
Feb 20 11:52:42.215424 master-0 kubenswrapper[7756]: I0220 11:52:42.215368 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" event={"ID":"ef18ace4-7316-4600-9be9-2adc792705e9","Type":"ContainerStarted","Data":"aaa2bc667b8e1ef611ebd89d15ca25a4ef93b299d9ea456045ea48cff68ed50c"}
Feb 20 11:52:42.217662 master-0 kubenswrapper[7756]: I0220 11:52:42.217610 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" event={"ID":"98226a59-5234-48f3-a9cd-21de305810dc","Type":"ContainerStarted","Data":"1b7b0cda43f9601273f5b828026cbdd290a92a99bdd94b1cd74e1268067e317e"}
Feb 20 11:52:42.217979 master-0 kubenswrapper[7756]: I0220 11:52:42.217942 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:52:42.223248 master-0 kubenswrapper[7756]: I0220 11:52:42.223198 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 11:52:42.223711 master-0 kubenswrapper[7756]: I0220 11:52:42.223678 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-k8vs5_d9f9442b-25b9-420f-b748-bb13423809fe/manager/0.log"
Feb 20 11:52:42.224172 master-0 kubenswrapper[7756]: I0220 11:52:42.224111 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" event={"ID":"d9f9442b-25b9-420f-b748-bb13423809fe","Type":"ContainerStarted","Data":"84ef230cc54cd476fcc604e2b0f1b7222d22839f67c943242d5c00ce3857fed6"}
Feb 20 11:52:42.224507 master-0 kubenswrapper[7756]: I0220 11:52:42.224454 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 11:52:42.274667 master-0 kubenswrapper[7756]: I0220 11:52:42.274547 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" podStartSLOduration=124.304534182 podStartE2EDuration="2m17.274518945s" podCreationTimestamp="2026-02-20 11:50:25 +0000 UTC" firstStartedPulling="2026-02-20 11:52:28.358666174 +0000 UTC m=+194.100914182" lastFinishedPulling="2026-02-20 11:52:41.328650917 +0000 UTC m=+207.070898945" observedRunningTime="2026-02-20 11:52:42.271503291 +0000 UTC m=+208.013751309" watchObservedRunningTime="2026-02-20 11:52:42.274518945 +0000 UTC m=+208.016766953"
Feb 20 11:52:42.295970 master-0 kubenswrapper[7756]: I0220 11:52:42.295904 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" podStartSLOduration=129.663749955 podStartE2EDuration="2m22.295885935s" podCreationTimestamp="2026-02-20 11:50:20 +0000 UTC" firstStartedPulling="2026-02-20 11:52:28.732078799 +0000 UTC m=+194.474326807" lastFinishedPulling="2026-02-20 11:52:41.364214779 +0000 UTC m=+207.106462787" observedRunningTime="2026-02-20 11:52:42.295379191 +0000 UTC m=+208.037627229" watchObservedRunningTime="2026-02-20 11:52:42.295885935 +0000 UTC m=+208.038133943"
Feb 20 11:52:44.082805 master-0 kubenswrapper[7756]: I0220 11:52:44.082725 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd"
Feb 20 11:52:44.893919 master-0 kubenswrapper[7756]: I0220 11:52:44.893708 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-vtcnw_6c3aa45a-44cc-48fb-a478-ce01a70c4b02/authentication-operator/0.log"
Feb 20 11:52:45.084166 master-0 kubenswrapper[7756]: I0220 11:52:45.084068 7756 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-vtcnw_6c3aa45a-44cc-48fb-a478-ce01a70c4b02/authentication-operator/1.log" Feb 20 11:52:45.480467 master-0 kubenswrapper[7756]: I0220 11:52:45.480398 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-69fc79b84-rr6rh_fca213c3-42ca-4341-a2e6-a143b9389f9e/fix-audit-permissions/0.log" Feb 20 11:52:45.688966 master-0 kubenswrapper[7756]: I0220 11:52:45.688877 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-69fc79b84-rr6rh_fca213c3-42ca-4341-a2e6-a143b9389f9e/oauth-apiserver/0.log" Feb 20 11:52:45.890926 master-0 kubenswrapper[7756]: I0220 11:52:45.890861 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-d69w2_1d3a36bb-9d11-48b3-a3b5-07b47738ef97/etcd-operator/0.log" Feb 20 11:52:46.081200 master-0 kubenswrapper[7756]: I0220 11:52:46.081141 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-d69w2_1d3a36bb-9d11-48b3-a3b5-07b47738ef97/etcd-operator/1.log" Feb 20 11:52:46.280969 master-0 kubenswrapper[7756]: I0220 11:52:46.280883 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/setup/0.log" Feb 20 11:52:46.479633 master-0 kubenswrapper[7756]: I0220 11:52:46.479570 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-ensure-env-vars/0.log" Feb 20 11:52:46.643289 master-0 kubenswrapper[7756]: I0220 11:52:46.642968 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 11:52:46.685476 master-0 kubenswrapper[7756]: I0220 11:52:46.685425 7756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-resources-copy/0.log" Feb 20 11:52:46.826296 master-0 kubenswrapper[7756]: I0220 11:52:46.826199 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 11:52:46.880588 master-0 kubenswrapper[7756]: I0220 11:52:46.880481 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 20 11:52:47.089102 master-0 kubenswrapper[7756]: I0220 11:52:47.089017 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 20 11:52:47.286404 master-0 kubenswrapper[7756]: I0220 11:52:47.286319 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 20 11:52:47.481551 master-0 kubenswrapper[7756]: I0220 11:52:47.481370 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-readyz/0.log" Feb 20 11:52:47.682022 master-0 kubenswrapper[7756]: I0220 11:52:47.681960 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 20 11:52:47.892225 master-0 kubenswrapper[7756]: I0220 11:52:47.892165 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_5710eb66-9717-4beb-a8b2-19f6886376b3/installer/0.log" Feb 20 11:52:48.092857 master-0 kubenswrapper[7756]: I0220 11:52:48.092775 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-kg75v_e0b28c90-d5b6-44f3-867c-020ece32ac7d/kube-apiserver-operator/0.log" Feb 20 11:52:48.279661 master-0 kubenswrapper[7756]: 
I0220 11:52:48.279585 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-kg75v_e0b28c90-d5b6-44f3-867c-020ece32ac7d/kube-apiserver-operator/1.log" Feb 20 11:52:48.479403 master-0 kubenswrapper[7756]: I0220 11:52:48.479318 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/setup/0.log" Feb 20 11:52:48.684493 master-0 kubenswrapper[7756]: I0220 11:52:48.684382 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver/0.log" Feb 20 11:52:48.880203 master-0 kubenswrapper[7756]: I0220 11:52:48.880087 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver-insecure-readyz/0.log" Feb 20 11:52:49.026488 master-0 kubenswrapper[7756]: I0220 11:52:49.026415 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-retry-1-master-0"] Feb 20 11:52:49.026788 master-0 kubenswrapper[7756]: E0220 11:52:49.026704 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3733ccb5-2cea-4151-a2a7-d9c089a34cbc" containerName="extract-utilities" Feb 20 11:52:49.026788 master-0 kubenswrapper[7756]: I0220 11:52:49.026720 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3733ccb5-2cea-4151-a2a7-d9c089a34cbc" containerName="extract-utilities" Feb 20 11:52:49.026788 master-0 kubenswrapper[7756]: E0220 11:52:49.026734 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50084c46-32ff-4e8a-b35e-8e7b1943cc04" containerName="extract-utilities" Feb 20 11:52:49.026788 master-0 kubenswrapper[7756]: I0220 11:52:49.026742 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="50084c46-32ff-4e8a-b35e-8e7b1943cc04" 
containerName="extract-utilities" Feb 20 11:52:49.026788 master-0 kubenswrapper[7756]: E0220 11:52:49.026758 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5710eb66-9717-4beb-a8b2-19f6886376b3" containerName="installer" Feb 20 11:52:49.026788 master-0 kubenswrapper[7756]: I0220 11:52:49.026768 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5710eb66-9717-4beb-a8b2-19f6886376b3" containerName="installer" Feb 20 11:52:49.026788 master-0 kubenswrapper[7756]: E0220 11:52:49.026781 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a827d746-cfd3-48a2-a20b-2ff1526986b9" containerName="installer" Feb 20 11:52:49.026788 master-0 kubenswrapper[7756]: I0220 11:52:49.026788 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a827d746-cfd3-48a2-a20b-2ff1526986b9" containerName="installer" Feb 20 11:52:49.026788 master-0 kubenswrapper[7756]: E0220 11:52:49.026799 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339f8487-0d2b-4f4f-9872-c629e7f3e2e1" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.026808 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="339f8487-0d2b-4f4f-9872-c629e7f3e2e1" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: E0220 11:52:49.026823 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c9c322-a0d1-4d27-b3bc-aaa8bd25beec" containerName="installer" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.026831 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c9c322-a0d1-4d27-b3bc-aaa8bd25beec" containerName="installer" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: E0220 11:52:49.026845 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339f8487-0d2b-4f4f-9872-c629e7f3e2e1" containerName="extract-utilities" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.026853 7756 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="339f8487-0d2b-4f4f-9872-c629e7f3e2e1" containerName="extract-utilities" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: E0220 11:52:49.026861 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.026869 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: E0220 11:52:49.026883 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" containerName="extract-utilities" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.026891 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" containerName="extract-utilities" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: E0220 11:52:49.026901 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd4430b-8dbc-46df-9efe-49d520a7c75a" containerName="installer" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.026908 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd4430b-8dbc-46df-9efe-49d520a7c75a" containerName="installer" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: E0220 11:52:49.026919 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50084c46-32ff-4e8a-b35e-8e7b1943cc04" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.026926 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="50084c46-32ff-4e8a-b35e-8e7b1943cc04" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: E0220 11:52:49.026942 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e9ba02-39d0-41fb-aed1-39923698bc0b" containerName="installer" Feb 
20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.026949 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e9ba02-39d0-41fb-aed1-39923698bc0b" containerName="installer" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: E0220 11:52:49.026960 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3733ccb5-2cea-4151-a2a7-d9c089a34cbc" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.026967 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3733ccb5-2cea-4151-a2a7-d9c089a34cbc" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.027090 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="52c9c322-a0d1-4d27-b3bc-aaa8bd25beec" containerName="installer" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.027105 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a827d746-cfd3-48a2-a20b-2ff1526986b9" containerName="installer" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.027118 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3733ccb5-2cea-4151-a2a7-d9c089a34cbc" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.027129 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c95e3bf8-5bc8-4e23-b83a-21a691fe3fcd" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.027144 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5710eb66-9717-4beb-a8b2-19f6886376b3" containerName="installer" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.027155 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="50084c46-32ff-4e8a-b35e-8e7b1943cc04" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.027164 7756 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="339f8487-0d2b-4f4f-9872-c629e7f3e2e1" containerName="extract-content" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.027176 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e9ba02-39d0-41fb-aed1-39923698bc0b" containerName="installer" Feb 20 11:52:49.027307 master-0 kubenswrapper[7756]: I0220 11:52:49.027192 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd4430b-8dbc-46df-9efe-49d520a7c75a" containerName="installer" Feb 20 11:52:49.028827 master-0 kubenswrapper[7756]: I0220 11:52:49.027628 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.031295 master-0 kubenswrapper[7756]: I0220 11:52:49.031225 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-zgwqj" Feb 20 11:52:49.031666 master-0 kubenswrapper[7756]: I0220 11:52:49.031625 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 20 11:52:49.048362 master-0 kubenswrapper[7756]: I0220 11:52:49.048297 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-retry-1-master-0"] Feb 20 11:52:49.063586 master-0 kubenswrapper[7756]: I0220 11:52:49.063509 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.063742 master-0 kubenswrapper[7756]: I0220 11:52:49.063659 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/35310285-fff9-43d6-ad9a-5d959ef116ec-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.063844 master-0 kubenswrapper[7756]: I0220 11:52:49.063773 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.088254 master-0 kubenswrapper[7756]: I0220 11:52:49.088164 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_74e9ba02-39d0-41fb-aed1-39923698bc0b/installer/0.log" Feb 20 11:52:49.166011 master-0 kubenswrapper[7756]: I0220 11:52:49.165913 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.166167 master-0 kubenswrapper[7756]: I0220 11:52:49.166072 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.166167 master-0 kubenswrapper[7756]: I0220 11:52:49.166110 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-kubelet-dir\") pod 
\"installer-2-retry-1-master-0\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.166419 master-0 kubenswrapper[7756]: I0220 11:52:49.166307 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.166516 master-0 kubenswrapper[7756]: I0220 11:52:49.166471 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35310285-fff9-43d6-ad9a-5d959ef116ec-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.201606 master-0 kubenswrapper[7756]: I0220 11:52:49.198871 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35310285-fff9-43d6-ad9a-5d959ef116ec-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.287658 master-0 kubenswrapper[7756]: I0220 11:52:49.287299 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_eb93420d-7c5a-4492-bd16-0104104406b4/installer/0.log" Feb 20 11:52:49.363812 master-0 kubenswrapper[7756]: I0220 11:52:49.363711 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:52:49.487249 master-0 kubenswrapper[7756]: I0220 11:52:49.487181 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-lsxtj_f1388469-5e55-4c1b-97c3-c88777f29ae7/kube-controller-manager-operator/0.log" Feb 20 11:52:49.681370 master-0 kubenswrapper[7756]: I0220 11:52:49.681183 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-lsxtj_f1388469-5e55-4c1b-97c3-c88777f29ae7/kube-controller-manager-operator/1.log" Feb 20 11:52:49.797247 master-0 kubenswrapper[7756]: I0220 11:52:49.797151 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-retry-1-master-0"] Feb 20 11:52:49.804520 master-0 kubenswrapper[7756]: W0220 11:52:49.804456 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod35310285_fff9_43d6_ad9a_5d959ef116ec.slice/crio-4736b5e4686f09c0b07f8d18c3b19a3ccd55085c207b7cd94523bcb6efbbf4ee WatchSource:0}: Error finding container 4736b5e4686f09c0b07f8d18c3b19a3ccd55085c207b7cd94523bcb6efbbf4ee: Status 404 returned error can't find the container with id 4736b5e4686f09c0b07f8d18c3b19a3ccd55085c207b7cd94523bcb6efbbf4ee Feb 20 11:52:49.892875 master-0 kubenswrapper[7756]: I0220 11:52:49.892796 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/2.log" Feb 20 11:52:50.290431 master-0 kubenswrapper[7756]: I0220 11:52:50.290230 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-retry-1-master-0" 
event={"ID":"35310285-fff9-43d6-ad9a-5d959ef116ec","Type":"ContainerStarted","Data":"4736b5e4686f09c0b07f8d18c3b19a3ccd55085c207b7cd94523bcb6efbbf4ee"} Feb 20 11:52:50.292051 master-0 kubenswrapper[7756]: I0220 11:52:50.291993 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/3.log" Feb 20 11:52:50.492514 master-0 kubenswrapper[7756]: I0220 11:52:50.492415 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/0.log" Feb 20 11:52:50.693841 master-0 kubenswrapper[7756]: I0220 11:52:50.689083 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/0.log" Feb 20 11:52:50.890235 master-0 kubenswrapper[7756]: I0220 11:52:50.890159 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/1.log" Feb 20 11:52:51.084076 master-0 kubenswrapper[7756]: I0220 11:52:51.083993 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7bd4430b-8dbc-46df-9efe-49d520a7c75a/installer/0.log" Feb 20 11:52:51.291317 master-0 kubenswrapper[7756]: I0220 11:52:51.291201 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-9zp85_f98aeaf7-bf1a-46af-bf1b-85713baa4c67/kube-scheduler-operator-container/0.log" Feb 20 11:52:51.298647 master-0 kubenswrapper[7756]: I0220 11:52:51.298571 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-retry-1-master-0" 
event={"ID":"35310285-fff9-43d6-ad9a-5d959ef116ec","Type":"ContainerStarted","Data":"359ae664c23ea8eeb6016bf515179345b86f7e1a68413d3d25df9e81032b59ac"} Feb 20 11:52:51.325515 master-0 kubenswrapper[7756]: I0220 11:52:51.324382 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-retry-1-master-0" podStartSLOduration=2.324351164 podStartE2EDuration="2.324351164s" podCreationTimestamp="2026-02-20 11:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:52:51.322733879 +0000 UTC m=+217.064981897" watchObservedRunningTime="2026-02-20 11:52:51.324351164 +0000 UTC m=+217.066599242" Feb 20 11:52:51.486539 master-0 kubenswrapper[7756]: I0220 11:52:51.486339 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-9zp85_f98aeaf7-bf1a-46af-bf1b-85713baa4c67/kube-scheduler-operator-container/1.log" Feb 20 11:52:51.685982 master-0 kubenswrapper[7756]: I0220 11:52:51.685914 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-lfdtx_1df81fcc-f967-4874-ad16-1a89f0e7875a/openshift-apiserver-operator/0.log" Feb 20 11:52:51.882555 master-0 kubenswrapper[7756]: I0220 11:52:51.882439 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-lfdtx_1df81fcc-f967-4874-ad16-1a89f0e7875a/openshift-apiserver-operator/1.log" Feb 20 11:52:52.081595 master-0 kubenswrapper[7756]: I0220 11:52:52.080122 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-7666bb78cc-jxswr_59c1cc61-8692-4a35-83fc-6bbef7086117/fix-audit-permissions/0.log" Feb 20 11:52:52.287699 master-0 kubenswrapper[7756]: I0220 11:52:52.287643 7756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver_apiserver-7666bb78cc-jxswr_59c1cc61-8692-4a35-83fc-6bbef7086117/openshift-apiserver/0.log" Feb 20 11:52:52.484682 master-0 kubenswrapper[7756]: I0220 11:52:52.484604 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-7666bb78cc-jxswr_59c1cc61-8692-4a35-83fc-6bbef7086117/openshift-apiserver-check-endpoints/0.log" Feb 20 11:52:52.689807 master-0 kubenswrapper[7756]: I0220 11:52:52.689643 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-d69w2_1d3a36bb-9d11-48b3-a3b5-07b47738ef97/etcd-operator/0.log" Feb 20 11:52:52.881879 master-0 kubenswrapper[7756]: I0220 11:52:52.881708 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-d69w2_1d3a36bb-9d11-48b3-a3b5-07b47738ef97/etcd-operator/1.log" Feb 20 11:52:53.089885 master-0 kubenswrapper[7756]: I0220 11:52:53.089794 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-qdb75_5360f3f5-2d07-432f-af45-22659538c55e/openshift-controller-manager-operator/0.log" Feb 20 11:52:53.288332 master-0 kubenswrapper[7756]: I0220 11:52:53.288204 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-599c7886f5-zltnd_98226a59-5234-48f3-a9cd-21de305810dc/controller-manager/0.log" Feb 20 11:52:53.489003 master-0 kubenswrapper[7756]: I0220 11:52:53.488811 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-599c7886f5-zltnd_98226a59-5234-48f3-a9cd-21de305810dc/controller-manager/1.log" Feb 20 11:52:53.688624 master-0 kubenswrapper[7756]: I0220 11:52:53.688551 7756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-689d967cd5-ptpq6_c29fd426-7c89-434e-8332-1ca31075d4bf/route-controller-manager/0.log"
Feb 20 11:52:53.886628 master-0 kubenswrapper[7756]: I0220 11:52:53.886496 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-bjxbt_4d060bff-3c25-4eeb-bdd3-e20fb2687645/catalog-operator/0.log"
Feb 20 11:52:54.289307 master-0 kubenswrapper[7756]: I0220 11:52:54.289060 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5499d7f7bb-6qtzc_d65a0af4-c96f-44f8-9384-6bae4585983b/olm-operator/0.log"
Feb 20 11:52:54.482800 master-0 kubenswrapper[7756]: I0220 11:52:54.482695 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-mr99g_dbce6cdc-040a-48e1-8a81-b6ff9c180eba/kube-rbac-proxy/0.log"
Feb 20 11:52:54.686813 master-0 kubenswrapper[7756]: I0220 11:52:54.686744 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-mr99g_dbce6cdc-040a-48e1-8a81-b6ff9c180eba/package-server-manager/0.log"
Feb 20 11:52:54.884974 master-0 kubenswrapper[7756]: I0220 11:52:54.884839 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-795fd44d5c-t99pw_ae1fd116-6f63-4344-b7af-278665649e5a/packageserver/0.log"
Feb 20 11:52:57.196999 master-0 kubenswrapper[7756]: I0220 11:52:57.196935 7756 patch_prober.go:28] interesting pod/machine-config-daemon-mpwks container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 11:52:57.199070 master-0 kubenswrapper[7756]: I0220 11:52:57.197692 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" podUID="37cb3bb1-f5ba-4b7b-9af9-55bf61906a51" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 11:52:59.575595 master-0 kubenswrapper[7756]: I0220 11:52:59.573651 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"]
Feb 20 11:52:59.575595 master-0 kubenswrapper[7756]: I0220 11:52:59.573961 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="cluster-cloud-controller-manager" containerID="cri-o://170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550" gracePeriod=30
Feb 20 11:52:59.575595 master-0 kubenswrapper[7756]: I0220 11:52:59.574370 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="kube-rbac-proxy" containerID="cri-o://c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f" gracePeriod=30
Feb 20 11:52:59.575595 master-0 kubenswrapper[7756]: I0220 11:52:59.574435 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="config-sync-controllers" containerID="cri-o://5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453" gracePeriod=30
Feb 20 11:52:59.575595 master-0 kubenswrapper[7756]: I0220 11:52:59.574508 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-cll9p"]
Feb 20 11:52:59.575595 master-0 kubenswrapper[7756]: I0220 11:52:59.574687 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" podUID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerName="kube-rbac-proxy" containerID="cri-o://9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea" gracePeriod=30
Feb 20 11:52:59.575595 master-0 kubenswrapper[7756]: I0220 11:52:59.574728 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" podUID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerName="machine-approver-controller" containerID="cri-o://c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749" gracePeriod=30
Feb 20 11:52:59.837505 master-0 kubenswrapper[7756]: I0220 11:52:59.837140 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:52:59.843981 master-0 kubenswrapper[7756]: I0220 11:52:59.843940 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p"
Feb 20 11:52:59.938091 master-0 kubenswrapper[7756]: I0220 11:52:59.938031 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzfwx\" (UniqueName: \"kubernetes.io/projected/bbfa556b-3986-44b5-bf47-be113d732ad8-kube-api-access-gzfwx\") pod \"bbfa556b-3986-44b5-bf47-be113d732ad8\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") "
Feb 20 11:52:59.938326 master-0 kubenswrapper[7756]: I0220 11:52:59.938141 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bbfa556b-3986-44b5-bf47-be113d732ad8-machine-approver-tls\") pod \"bbfa556b-3986-44b5-bf47-be113d732ad8\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") "
Feb 20 11:52:59.938326 master-0 kubenswrapper[7756]: I0220 11:52:59.938186 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-auth-proxy-config\") pod \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") "
Feb 20 11:52:59.938326 master-0 kubenswrapper[7756]: I0220 11:52:59.938265 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-host-etc-kube\") pod \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") "
Feb 20 11:52:59.938326 master-0 kubenswrapper[7756]: I0220 11:52:59.938303 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rngh2\" (UniqueName: \"kubernetes.io/projected/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-kube-api-access-rngh2\") pod \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") "
Feb 20 11:52:59.938447 master-0 kubenswrapper[7756]: I0220 11:52:59.938353 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-cloud-controller-manager-operator-tls\") pod \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") "
Feb 20 11:52:59.938447 master-0 kubenswrapper[7756]: I0220 11:52:59.938392 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-config\") pod \"bbfa556b-3986-44b5-bf47-be113d732ad8\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") "
Feb 20 11:52:59.938507 master-0 kubenswrapper[7756]: I0220 11:52:59.938447 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-images\") pod \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\" (UID: \"5e270e14-6e48-4fd0-bbd6-73e401a88e1d\") "
Feb 20 11:52:59.938557 master-0 kubenswrapper[7756]: I0220 11:52:59.938503 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-auth-proxy-config\") pod \"bbfa556b-3986-44b5-bf47-be113d732ad8\" (UID: \"bbfa556b-3986-44b5-bf47-be113d732ad8\") "
Feb 20 11:52:59.939289 master-0 kubenswrapper[7756]: I0220 11:52:59.938967 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-config" (OuterVolumeSpecName: "config") pod "bbfa556b-3986-44b5-bf47-be113d732ad8" (UID: "bbfa556b-3986-44b5-bf47-be113d732ad8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:52:59.939289 master-0 kubenswrapper[7756]: I0220 11:52:59.939155 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-images" (OuterVolumeSpecName: "images") pod "5e270e14-6e48-4fd0-bbd6-73e401a88e1d" (UID: "5e270e14-6e48-4fd0-bbd6-73e401a88e1d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:52:59.942598 master-0 kubenswrapper[7756]: I0220 11:52:59.939607 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "bbfa556b-3986-44b5-bf47-be113d732ad8" (UID: "bbfa556b-3986-44b5-bf47-be113d732ad8"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:52:59.942598 master-0 kubenswrapper[7756]: I0220 11:52:59.939648 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "5e270e14-6e48-4fd0-bbd6-73e401a88e1d" (UID: "5e270e14-6e48-4fd0-bbd6-73e401a88e1d"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:52:59.942598 master-0 kubenswrapper[7756]: I0220 11:52:59.939700 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "5e270e14-6e48-4fd0-bbd6-73e401a88e1d" (UID: "5e270e14-6e48-4fd0-bbd6-73e401a88e1d"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:52:59.945587 master-0 kubenswrapper[7756]: I0220 11:52:59.945453 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfa556b-3986-44b5-bf47-be113d732ad8-kube-api-access-gzfwx" (OuterVolumeSpecName: "kube-api-access-gzfwx") pod "bbfa556b-3986-44b5-bf47-be113d732ad8" (UID: "bbfa556b-3986-44b5-bf47-be113d732ad8"). InnerVolumeSpecName "kube-api-access-gzfwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:52:59.945716 master-0 kubenswrapper[7756]: I0220 11:52:59.945671 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "5e270e14-6e48-4fd0-bbd6-73e401a88e1d" (UID: "5e270e14-6e48-4fd0-bbd6-73e401a88e1d"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:52:59.949171 master-0 kubenswrapper[7756]: I0220 11:52:59.949116 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfa556b-3986-44b5-bf47-be113d732ad8-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "bbfa556b-3986-44b5-bf47-be113d732ad8" (UID: "bbfa556b-3986-44b5-bf47-be113d732ad8"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:52:59.949545 master-0 kubenswrapper[7756]: I0220 11:52:59.949480 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-kube-api-access-rngh2" (OuterVolumeSpecName: "kube-api-access-rngh2") pod "5e270e14-6e48-4fd0-bbd6-73e401a88e1d" (UID: "5e270e14-6e48-4fd0-bbd6-73e401a88e1d"). InnerVolumeSpecName "kube-api-access-rngh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:53:00.039797 master-0 kubenswrapper[7756]: I0220 11:53:00.039751 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzfwx\" (UniqueName: \"kubernetes.io/projected/bbfa556b-3986-44b5-bf47-be113d732ad8-kube-api-access-gzfwx\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:00.039797 master-0 kubenswrapper[7756]: I0220 11:53:00.039783 7756 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bbfa556b-3986-44b5-bf47-be113d732ad8-machine-approver-tls\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:00.039797 master-0 kubenswrapper[7756]: I0220 11:53:00.039793 7756 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:00.039797 master-0 kubenswrapper[7756]: I0220 11:53:00.039802 7756 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-host-etc-kube\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:00.040079 master-0 kubenswrapper[7756]: I0220 11:53:00.039834 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rngh2\" (UniqueName: \"kubernetes.io/projected/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-kube-api-access-rngh2\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:00.040079 master-0 kubenswrapper[7756]: I0220 11:53:00.039845 7756 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:00.040079 master-0 kubenswrapper[7756]: I0220 11:53:00.039856 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:00.040079 master-0 kubenswrapper[7756]: I0220 11:53:00.039866 7756 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5e270e14-6e48-4fd0-bbd6-73e401a88e1d-images\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:00.040079 master-0 kubenswrapper[7756]: I0220 11:53:00.039875 7756 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbfa556b-3986-44b5-bf47-be113d732ad8-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:00.372798 master-0 kubenswrapper[7756]: I0220 11:53:00.372723 7756 generic.go:334] "Generic (PLEG): container finished" podID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerID="c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749" exitCode=0
Feb 20 11:53:00.372798 master-0 kubenswrapper[7756]: I0220 11:53:00.372792 7756 generic.go:334] "Generic (PLEG): container finished" podID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerID="9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea" exitCode=0
Feb 20 11:53:00.373116 master-0 kubenswrapper[7756]: I0220 11:53:00.372877 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" event={"ID":"bbfa556b-3986-44b5-bf47-be113d732ad8","Type":"ContainerDied","Data":"c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749"}
Feb 20 11:53:00.373116 master-0 kubenswrapper[7756]: I0220 11:53:00.372951 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" event={"ID":"bbfa556b-3986-44b5-bf47-be113d732ad8","Type":"ContainerDied","Data":"9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea"}
Feb 20 11:53:00.373116 master-0 kubenswrapper[7756]: I0220 11:53:00.372997 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p" event={"ID":"bbfa556b-3986-44b5-bf47-be113d732ad8","Type":"ContainerDied","Data":"11c133e52238e057b93eaa645207b925b45693009945d2a2b0773bd924046bd2"}
Feb 20 11:53:00.373116 master-0 kubenswrapper[7756]: I0220 11:53:00.373036 7756 scope.go:117] "RemoveContainer" containerID="c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749"
Feb 20 11:53:00.373338 master-0 kubenswrapper[7756]: I0220 11:53:00.373277 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-cll9p"
Feb 20 11:53:00.379892 master-0 kubenswrapper[7756]: I0220 11:53:00.379829 7756 generic.go:334] "Generic (PLEG): container finished" podID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerID="c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f" exitCode=0
Feb 20 11:53:00.379892 master-0 kubenswrapper[7756]: I0220 11:53:00.379875 7756 generic.go:334] "Generic (PLEG): container finished" podID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerID="5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453" exitCode=0
Feb 20 11:53:00.379892 master-0 kubenswrapper[7756]: I0220 11:53:00.379886 7756 generic.go:334] "Generic (PLEG): container finished" podID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerID="170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550" exitCode=0
Feb 20 11:53:00.380051 master-0 kubenswrapper[7756]: I0220 11:53:00.379910 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" event={"ID":"5e270e14-6e48-4fd0-bbd6-73e401a88e1d","Type":"ContainerDied","Data":"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"}
Feb 20 11:53:00.380051 master-0 kubenswrapper[7756]: I0220 11:53:00.379951 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" event={"ID":"5e270e14-6e48-4fd0-bbd6-73e401a88e1d","Type":"ContainerDied","Data":"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"}
Feb 20 11:53:00.380051 master-0 kubenswrapper[7756]: I0220 11:53:00.379966 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" event={"ID":"5e270e14-6e48-4fd0-bbd6-73e401a88e1d","Type":"ContainerDied","Data":"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"}
Feb 20 11:53:00.380051 master-0 kubenswrapper[7756]: I0220 11:53:00.379979 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq" event={"ID":"5e270e14-6e48-4fd0-bbd6-73e401a88e1d","Type":"ContainerDied","Data":"33702da26d6f4dc7caebef6e36ac571d9d9f35d8ceadf09833d286f6ffd2ab74"}
Feb 20 11:53:00.380051 master-0 kubenswrapper[7756]: I0220 11:53:00.380048 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"
Feb 20 11:53:00.415740 master-0 kubenswrapper[7756]: I0220 11:53:00.415697 7756 scope.go:117] "RemoveContainer" containerID="9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea"
Feb 20 11:53:00.436984 master-0 kubenswrapper[7756]: I0220 11:53:00.436918 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"]
Feb 20 11:53:00.442658 master-0 kubenswrapper[7756]: I0220 11:53:00.442620 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-x8zgq"]
Feb 20 11:53:00.444119 master-0 kubenswrapper[7756]: I0220 11:53:00.444083 7756 scope.go:117] "RemoveContainer" containerID="c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749"
Feb 20 11:53:00.444684 master-0 kubenswrapper[7756]: E0220 11:53:00.444646 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749\": container with ID starting with c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749 not found: ID does not exist" containerID="c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749"
Feb 20 11:53:00.444776 master-0 kubenswrapper[7756]: I0220 11:53:00.444687 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749"} err="failed to get container status \"c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749\": rpc error: code = NotFound desc = could not find container \"c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749\": container with ID starting with c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749 not found: ID does not exist"
Feb 20 11:53:00.444776 master-0 kubenswrapper[7756]: I0220 11:53:00.444715 7756 scope.go:117] "RemoveContainer" containerID="9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea"
Feb 20 11:53:00.445679 master-0 kubenswrapper[7756]: E0220 11:53:00.445329 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea\": container with ID starting with 9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea not found: ID does not exist" containerID="9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea"
Feb 20 11:53:00.445679 master-0 kubenswrapper[7756]: I0220 11:53:00.445398 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea"} err="failed to get container status \"9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea\": rpc error: code = NotFound desc = could not find container \"9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea\": container with ID starting with 9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea not found: ID does not exist"
Feb 20 11:53:00.445679 master-0 kubenswrapper[7756]: I0220 11:53:00.445426 7756 scope.go:117] "RemoveContainer" containerID="c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749"
Feb 20 11:53:00.446177 master-0 kubenswrapper[7756]: I0220 11:53:00.446127 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749"} err="failed to get container status \"c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749\": rpc error: code = NotFound desc = could not find container \"c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749\": container with ID starting with c37d5480e69992982d3d98ccc655de7f2c879c087c4528b949b025e944996749 not found: ID does not exist"
Feb 20 11:53:00.446177 master-0 kubenswrapper[7756]: I0220 11:53:00.446161 7756 scope.go:117] "RemoveContainer" containerID="9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea"
Feb 20 11:53:00.446564 master-0 kubenswrapper[7756]: I0220 11:53:00.446515 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea"} err="failed to get container status \"9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea\": rpc error: code = NotFound desc = could not find container \"9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea\": container with ID starting with 9791ee90a0f7a9fc85f2acfd528a2cf50e7c85395ae902c42b8872fa0acf48ea not found: ID does not exist"
Feb 20 11:53:00.446564 master-0 kubenswrapper[7756]: I0220 11:53:00.446563 7756 scope.go:117] "RemoveContainer" containerID="c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"
Feb 20 11:53:00.446735 master-0 kubenswrapper[7756]: I0220 11:53:00.446706 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-cll9p"]
Feb 20 11:53:00.449679 master-0 kubenswrapper[7756]: I0220 11:53:00.449635 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-cll9p"]
Feb 20 11:53:00.467121 master-0 kubenswrapper[7756]: I0220 11:53:00.467058 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg"]
Feb 20 11:53:00.467314 master-0 kubenswrapper[7756]: E0220 11:53:00.467291 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerName="machine-approver-controller"
Feb 20 11:53:00.467314 master-0 kubenswrapper[7756]: I0220 11:53:00.467308 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerName="machine-approver-controller"
Feb 20 11:53:00.467430 master-0 kubenswrapper[7756]: E0220 11:53:00.467326 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="config-sync-controllers"
Feb 20 11:53:00.467430 master-0 kubenswrapper[7756]: I0220 11:53:00.467335 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="config-sync-controllers"
Feb 20 11:53:00.467430 master-0 kubenswrapper[7756]: E0220 11:53:00.467351 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerName="kube-rbac-proxy"
Feb 20 11:53:00.467430 master-0 kubenswrapper[7756]: I0220 11:53:00.467358 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerName="kube-rbac-proxy"
Feb 20 11:53:00.467430 master-0 kubenswrapper[7756]: E0220 11:53:00.467378 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="cluster-cloud-controller-manager"
Feb 20 11:53:00.467430 master-0 kubenswrapper[7756]: I0220 11:53:00.467386 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="cluster-cloud-controller-manager"
Feb 20 11:53:00.467430 master-0 kubenswrapper[7756]: E0220 11:53:00.467421 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="kube-rbac-proxy"
Feb 20 11:53:00.467430 master-0 kubenswrapper[7756]: I0220 11:53:00.467434 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="kube-rbac-proxy"
Feb 20 11:53:00.467752 master-0 kubenswrapper[7756]: I0220 11:53:00.467574 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerName="kube-rbac-proxy"
Feb 20 11:53:00.467752 master-0 kubenswrapper[7756]: I0220 11:53:00.467594 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="cluster-cloud-controller-manager"
Feb 20 11:53:00.467752 master-0 kubenswrapper[7756]: I0220 11:53:00.467609 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfa556b-3986-44b5-bf47-be113d732ad8" containerName="machine-approver-controller"
Feb 20 11:53:00.467752 master-0 kubenswrapper[7756]: I0220 11:53:00.467620 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="kube-rbac-proxy"
Feb 20 11:53:00.467752 master-0 kubenswrapper[7756]: I0220 11:53:00.467633 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" containerName="config-sync-controllers"
Feb 20 11:53:00.468476 master-0 kubenswrapper[7756]: I0220 11:53:00.468455 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg"
Feb 20 11:53:00.470442 master-0 kubenswrapper[7756]: I0220 11:53:00.470420 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 20 11:53:00.470706 master-0 kubenswrapper[7756]: I0220 11:53:00.470683 7756 scope.go:117] "RemoveContainer" containerID="5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"
Feb 20 11:53:00.471108 master-0 kubenswrapper[7756]: I0220 11:53:00.470922 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-g5hcq"
Feb 20 11:53:00.471749 master-0 kubenswrapper[7756]: I0220 11:53:00.471727 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 20 11:53:00.471899 master-0 kubenswrapper[7756]: I0220 11:53:00.471776 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 20 11:53:00.471899 master-0 kubenswrapper[7756]: I0220 11:53:00.471817 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 11:53:00.471899 master-0 kubenswrapper[7756]: I0220 11:53:00.471855 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 11:53:00.501107 master-0 kubenswrapper[7756]: I0220 11:53:00.501053 7756 scope.go:117] "RemoveContainer" containerID="170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"
Feb 20 11:53:00.521970 master-0 kubenswrapper[7756]: I0220 11:53:00.521757 7756 scope.go:117] "RemoveContainer" containerID="c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"
Feb 20 11:53:00.522323 master-0 kubenswrapper[7756]: E0220 11:53:00.522278 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f\": container with ID starting with c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f not found: ID does not exist" containerID="c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"
Feb 20 11:53:00.522404 master-0 kubenswrapper[7756]: I0220 11:53:00.522324 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"} err="failed to get container status \"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f\": rpc error: code = NotFound desc = could not find container \"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f\": container with ID starting with c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f not found: ID does not exist"
Feb 20 11:53:00.522404 master-0 kubenswrapper[7756]: I0220 11:53:00.522352 7756 scope.go:117] "RemoveContainer" containerID="5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"
Feb 20 11:53:00.522802 master-0 kubenswrapper[7756]: E0220 11:53:00.522767 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453\": container with ID starting with 5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453 not found: ID does not exist" containerID="5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"
Feb 20 11:53:00.522867 master-0 kubenswrapper[7756]: I0220 11:53:00.522807 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"} err="failed to get container status \"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453\": rpc error: code = NotFound desc = could not find container \"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453\": container with ID starting with 5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453 not found: ID does not exist"
Feb 20 11:53:00.522867 master-0 kubenswrapper[7756]: I0220 11:53:00.522834 7756 scope.go:117] "RemoveContainer" containerID="170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"
Feb 20 11:53:00.523698 master-0 kubenswrapper[7756]: E0220 11:53:00.523640 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550\": container with ID starting with 170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550 not found: ID does not exist" containerID="170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"
Feb 20 11:53:00.523698 master-0 kubenswrapper[7756]: I0220 11:53:00.523686 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"} err="failed to get container status \"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550\": rpc error: code = NotFound desc = could not find container \"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550\": container with ID starting with 170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550 not found: ID does not exist"
Feb 20 11:53:00.523698 master-0 kubenswrapper[7756]: I0220 11:53:00.523702 7756 scope.go:117] "RemoveContainer" containerID="c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"
Feb 20 11:53:00.524225 master-0 kubenswrapper[7756]: I0220 11:53:00.524181 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"} err="failed to get container status \"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f\": rpc error: code = NotFound desc = could not find container \"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f\": container with ID starting with c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f not found: ID does not exist"
Feb 20 11:53:00.524225 master-0 kubenswrapper[7756]: I0220 11:53:00.524212 7756 scope.go:117] "RemoveContainer" containerID="5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"
Feb 20 11:53:00.524595 master-0 kubenswrapper[7756]: I0220 11:53:00.524565 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"} err="failed to get container status \"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453\": rpc error: code = NotFound desc = could not find container \"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453\": container with ID starting with 5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453 not found: ID does not exist"
Feb 20 11:53:00.524595 master-0 kubenswrapper[7756]: I0220 11:53:00.524592 7756 scope.go:117] "RemoveContainer" containerID="170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"
Feb 20 11:53:00.525016 master-0 kubenswrapper[7756]: I0220 11:53:00.524989 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"} err="failed to get container status \"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550\": rpc error: code = NotFound desc = could not find container \"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550\": container with ID starting with 170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550 not found: ID does not exist"
Feb 20 11:53:00.525016 master-0 kubenswrapper[7756]: I0220 11:53:00.525010 7756 scope.go:117] "RemoveContainer" containerID="c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"
Feb 20 11:53:00.525430 master-0 kubenswrapper[7756]: I0220 11:53:00.525402 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f"} err="failed to get container status \"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f\": rpc error: code = NotFound desc = could not find container \"c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f\": container with ID starting with c8d0fcdae0a4e25712d7989f5915e631db415dd8844a40cbdf2728532dbcc02f not found: ID does not exist"
Feb 20 11:53:00.525430 master-0 kubenswrapper[7756]: I0220 11:53:00.525424 7756 scope.go:117] "RemoveContainer" containerID="5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"
Feb 20 11:53:00.525782 master-0 kubenswrapper[7756]: I0220 11:53:00.525753 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453"} err="failed to get container status \"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453\": rpc error: code = NotFound desc = could not find container \"5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453\": container with ID starting with 5746aafb443fd75c5a4040c7e5a3c079ccd3e882f0f4b16a3a8e7dc380a8c453 not found: ID does not exist"
Feb 20 11:53:00.525782 master-0 kubenswrapper[7756]: I0220 11:53:00.525774 7756 scope.go:117] "RemoveContainer" containerID="170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"
Feb 20 11:53:00.526922 master-0 kubenswrapper[7756]: I0220 11:53:00.526141 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550"} err="failed to get container status \"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550\": rpc error: code = NotFound desc = could not find container \"170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550\": container with ID starting with 170be49547a93cfadbec842229e7f4997d7617a299daafd63c637b40cfab6550 not found: ID does not exist"
Feb 20 11:53:00.590997 master-0 kubenswrapper[7756]: I0220 11:53:00.590938 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e270e14-6e48-4fd0-bbd6-73e401a88e1d" path="/var/lib/kubelet/pods/5e270e14-6e48-4fd0-bbd6-73e401a88e1d/volumes"
Feb 20 11:53:00.592331 master-0 kubenswrapper[7756]: I0220 11:53:00.592287 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbfa556b-3986-44b5-bf47-be113d732ad8" path="/var/lib/kubelet/pods/bbfa556b-3986-44b5-bf47-be113d732ad8/volumes"
Feb 20 11:53:00.645858 master-0 kubenswrapper[7756]: I0220 11:53:00.645696 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg"
Feb 20 11:53:00.645858 master-0 kubenswrapper[7756]: I0220 11:53:00.645836 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-cloud-controller-manager-operator-tls\") pod
\"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.646138 master-0 kubenswrapper[7756]: I0220 11:53:00.645883 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.646138 master-0 kubenswrapper[7756]: I0220 11:53:00.645924 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksx6l\" (UniqueName: \"kubernetes.io/projected/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-kube-api-access-ksx6l\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.646138 master-0 kubenswrapper[7756]: I0220 11:53:00.645977 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.747519 master-0 kubenswrapper[7756]: I0220 11:53:00.747440 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksx6l\" (UniqueName: 
\"kubernetes.io/projected/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-kube-api-access-ksx6l\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.747771 master-0 kubenswrapper[7756]: I0220 11:53:00.747590 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.747771 master-0 kubenswrapper[7756]: I0220 11:53:00.747668 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.747872 master-0 kubenswrapper[7756]: I0220 11:53:00.747809 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.747958 master-0 kubenswrapper[7756]: I0220 11:53:00.747867 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.749951 master-0 kubenswrapper[7756]: I0220 11:53:00.749899 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.750639 master-0 kubenswrapper[7756]: I0220 11:53:00.750368 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.751295 master-0 kubenswrapper[7756]: I0220 11:53:00.751243 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:00.756055 master-0 kubenswrapper[7756]: I0220 11:53:00.755999 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:01.428870 master-0 kubenswrapper[7756]: I0220 11:53:01.428793 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l"] Feb 20 11:53:01.430269 master-0 kubenswrapper[7756]: I0220 11:53:01.430215 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.434362 master-0 kubenswrapper[7756]: I0220 11:53:01.434305 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 20 11:53:01.434560 master-0 kubenswrapper[7756]: I0220 11:53:01.434485 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 20 11:53:01.434647 master-0 kubenswrapper[7756]: I0220 11:53:01.434548 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 20 11:53:01.434747 master-0 kubenswrapper[7756]: I0220 11:53:01.434705 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 20 11:53:01.434972 master-0 kubenswrapper[7756]: I0220 11:53:01.434912 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-l5hc4" Feb 20 11:53:01.437603 master-0 kubenswrapper[7756]: I0220 11:53:01.437555 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 20 
11:53:01.448837 master-0 kubenswrapper[7756]: I0220 11:53:01.448456 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksx6l\" (UniqueName: \"kubernetes.io/projected/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-kube-api-access-ksx6l\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:01.561298 master-0 kubenswrapper[7756]: I0220 11:53:01.561225 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.561571 master-0 kubenswrapper[7756]: I0220 11:53:01.561309 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5m78\" (UniqueName: \"kubernetes.io/projected/7635c0ff-4d40-4310-8187-230323e504e0-kube-api-access-p5m78\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.561571 master-0 kubenswrapper[7756]: I0220 11:53:01.561478 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7635c0ff-4d40-4310-8187-230323e504e0-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.561677 master-0 kubenswrapper[7756]: I0220 11:53:01.561590 7756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.629885 master-0 kubenswrapper[7756]: I0220 11:53:01.629794 7756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 20 11:53:01.630588 master-0 kubenswrapper[7756]: I0220 11:53:01.630156 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" containerID="cri-o://f1682d7b4b37ab8ab7b0e93abba0b5ee3a264e78978d6dc34d6d434f13d2a6ae" gracePeriod=30 Feb 20 11:53:01.630588 master-0 kubenswrapper[7756]: I0220 11:53:01.630406 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://91bf4bc38d2da6c505ee04354464ef749c6984385a6a3cb062fc7393534e0bd7" gracePeriod=30 Feb 20 11:53:01.631760 master-0 kubenswrapper[7756]: I0220 11:53:01.631394 7756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 11:53:01.631760 master-0 kubenswrapper[7756]: E0220 11:53:01.631724 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.631760 master-0 kubenswrapper[7756]: I0220 11:53:01.631748 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.631892 
master-0 kubenswrapper[7756]: E0220 11:53:01.631769 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.631892 master-0 kubenswrapper[7756]: I0220 11:53:01.631788 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.631892 master-0 kubenswrapper[7756]: E0220 11:53:01.631815 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.631892 master-0 kubenswrapper[7756]: I0220 11:53:01.631831 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.631892 master-0 kubenswrapper[7756]: E0220 11:53:01.631861 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.631892 master-0 kubenswrapper[7756]: I0220 11:53:01.631879 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.632159 master-0 kubenswrapper[7756]: E0220 11:53:01.631900 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 20 11:53:01.632159 master-0 kubenswrapper[7756]: I0220 11:53:01.631917 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 20 11:53:01.632159 master-0 kubenswrapper[7756]: I0220 11:53:01.632130 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.632159 master-0 kubenswrapper[7756]: I0220 11:53:01.632155 7756 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.632369 master-0 kubenswrapper[7756]: I0220 11:53:01.632171 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 20 11:53:01.632369 master-0 kubenswrapper[7756]: I0220 11:53:01.632192 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.632658 master-0 kubenswrapper[7756]: I0220 11:53:01.632613 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 11:53:01.634061 master-0 kubenswrapper[7756]: I0220 11:53:01.634006 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:53:01.663096 master-0 kubenswrapper[7756]: I0220 11:53:01.663040 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7635c0ff-4d40-4310-8187-230323e504e0-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.663189 master-0 kubenswrapper[7756]: I0220 11:53:01.663114 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.663375 master-0 kubenswrapper[7756]: I0220 11:53:01.663337 7756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.663416 master-0 kubenswrapper[7756]: I0220 11:53:01.663399 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5m78\" (UniqueName: \"kubernetes.io/projected/7635c0ff-4d40-4310-8187-230323e504e0-kube-api-access-p5m78\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.664216 master-0 kubenswrapper[7756]: I0220 11:53:01.664181 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.664422 master-0 kubenswrapper[7756]: I0220 11:53:01.664387 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.666750 master-0 kubenswrapper[7756]: I0220 11:53:01.666695 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7635c0ff-4d40-4310-8187-230323e504e0-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " 
pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.674710 master-0 kubenswrapper[7756]: I0220 11:53:01.674642 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 11:53:01.683018 master-0 kubenswrapper[7756]: I0220 11:53:01.682237 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5m78\" (UniqueName: \"kubernetes.io/projected/7635c0ff-4d40-4310-8187-230323e504e0-kube-api-access-p5m78\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.689666 master-0 kubenswrapper[7756]: I0220 11:53:01.689609 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 11:53:01.765438 master-0 kubenswrapper[7756]: I0220 11:53:01.765198 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a767e0793175d588147a983384ee43db\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:53:01.765438 master-0 kubenswrapper[7756]: I0220 11:53:01.765302 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a767e0793175d588147a983384ee43db\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:53:01.768859 master-0 kubenswrapper[7756]: W0220 11:53:01.768816 7756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c48a22_ed96_42c5_ac4a_dd7d4f204539.slice/crio-db83379789ef98a1c8bd3954093bb31968ab0139d9f5bc569d532d29a9e92213 WatchSource:0}: Error finding container db83379789ef98a1c8bd3954093bb31968ab0139d9f5bc569d532d29a9e92213: Status 404 returned error can't find the container with id db83379789ef98a1c8bd3954093bb31968ab0139d9f5bc569d532d29a9e92213 Feb 20 11:53:01.799545 master-0 kubenswrapper[7756]: I0220 11:53:01.799451 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 11:53:01.836856 master-0 kubenswrapper[7756]: W0220 11:53:01.836802 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7635c0ff_4d40_4310_8187_230323e504e0.slice/crio-1a01adc1f41522dbb8a1d23da740cfd44f6a53e272a46c5d7003ab771e7ccdcb WatchSource:0}: Error finding container 1a01adc1f41522dbb8a1d23da740cfd44f6a53e272a46c5d7003ab771e7ccdcb: Status 404 returned error can't find the container with id 1a01adc1f41522dbb8a1d23da740cfd44f6a53e272a46c5d7003ab771e7ccdcb Feb 20 11:53:01.867120 master-0 kubenswrapper[7756]: I0220 11:53:01.867052 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a767e0793175d588147a983384ee43db\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:53:01.867211 master-0 kubenswrapper[7756]: I0220 11:53:01.867160 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a767e0793175d588147a983384ee43db\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:53:01.867318 master-0 kubenswrapper[7756]: I0220 11:53:01.867256 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a767e0793175d588147a983384ee43db\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:53:01.867367 master-0 kubenswrapper[7756]: I0220 11:53:01.867294 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a767e0793175d588147a983384ee43db\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:53:01.952480 master-0 kubenswrapper[7756]: I0220 11:53:01.952425 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 11:53:01.968144 master-0 kubenswrapper[7756]: I0220 11:53:01.968105 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:53:01.988160 master-0 kubenswrapper[7756]: W0220 11:53:01.988095 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda767e0793175d588147a983384ee43db.slice/crio-0485d41f9e9692494bf4d5e2e9529ac2d8ff045a5850618429ca0e5f2f95327f WatchSource:0}: Error finding container 0485d41f9e9692494bf4d5e2e9529ac2d8ff045a5850618429ca0e5f2f95327f: Status 404 returned error can't find the container with id 0485d41f9e9692494bf4d5e2e9529ac2d8ff045a5850618429ca0e5f2f95327f Feb 20 11:53:02.007501 master-0 kubenswrapper[7756]: I0220 11:53:02.006796 7756 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="440f00a0-01e7-4314-9807-5173894fb112" Feb 20 11:53:02.069857 master-0 kubenswrapper[7756]: I0220 11:53:02.069814 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 11:53:02.070045 master-0 kubenswrapper[7756]: I0220 11:53:02.069870 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 11:53:02.070045 master-0 kubenswrapper[7756]: I0220 11:53:02.069886 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 11:53:02.070045 master-0 
kubenswrapper[7756]: I0220 11:53:02.069912 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 11:53:02.070045 master-0 kubenswrapper[7756]: I0220 11:53:02.069960 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 11:53:02.070437 master-0 kubenswrapper[7756]: I0220 11:53:02.070327 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs" (OuterVolumeSpecName: "logs") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:53:02.070437 master-0 kubenswrapper[7756]: I0220 11:53:02.070356 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:53:02.070437 master-0 kubenswrapper[7756]: I0220 11:53:02.070372 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:53:02.070437 master-0 kubenswrapper[7756]: I0220 11:53:02.070386 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config" (OuterVolumeSpecName: "config") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:53:02.070437 master-0 kubenswrapper[7756]: I0220 11:53:02.070403 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets" (OuterVolumeSpecName: "secrets") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:53:02.171078 master-0 kubenswrapper[7756]: I0220 11:53:02.171026 7756 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") on node \"master-0\" DevicePath \"\"" Feb 20 11:53:02.171078 master-0 kubenswrapper[7756]: I0220 11:53:02.171063 7756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 11:53:02.171078 master-0 kubenswrapper[7756]: I0220 11:53:02.171079 7756 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 20 11:53:02.171324 master-0 kubenswrapper[7756]: I0220 11:53:02.171093 7756 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" 
Feb 20 11:53:02.171324 master-0 kubenswrapper[7756]: I0220 11:53:02.171106 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:02.400857 master-0 kubenswrapper[7756]: I0220 11:53:02.400615 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" event={"ID":"e8c48a22-ed96-42c5-ac4a-dd7d4f204539","Type":"ContainerStarted","Data":"3de8f37a5f333a2a0c06335a1e1da92af4239f0f86ce6fc2f55eb1e6b9d57ccf"}
Feb 20 11:53:02.400857 master-0 kubenswrapper[7756]: I0220 11:53:02.400663 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" event={"ID":"e8c48a22-ed96-42c5-ac4a-dd7d4f204539","Type":"ContainerStarted","Data":"5ae28e0dd7617cbe98b911e55270072130fade6b7dce5510c67c9d3d17bc60bf"}
Feb 20 11:53:02.400857 master-0 kubenswrapper[7756]: I0220 11:53:02.400676 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" event={"ID":"e8c48a22-ed96-42c5-ac4a-dd7d4f204539","Type":"ContainerStarted","Data":"db83379789ef98a1c8bd3954093bb31968ab0139d9f5bc569d532d29a9e92213"}
Feb 20 11:53:02.402771 master-0 kubenswrapper[7756]: I0220 11:53:02.402462 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" event={"ID":"7635c0ff-4d40-4310-8187-230323e504e0","Type":"ContainerStarted","Data":"43e3bfd2d03db486eaa07c471fb4184138af1fd2a51e7d71dbadb2ebc26dee9d"}
Feb 20 11:53:02.402771 master-0 kubenswrapper[7756]: I0220 11:53:02.402485 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l"
event={"ID":"7635c0ff-4d40-4310-8187-230323e504e0","Type":"ContainerStarted","Data":"7c9278a1d3d2466c483e93e79bfad049432771a18fc950094bb44a8f0f527114"}
Feb 20 11:53:02.402771 master-0 kubenswrapper[7756]: I0220 11:53:02.402495 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" event={"ID":"7635c0ff-4d40-4310-8187-230323e504e0","Type":"ContainerStarted","Data":"1a01adc1f41522dbb8a1d23da740cfd44f6a53e272a46c5d7003ab771e7ccdcb"}
Feb 20 11:53:02.405932 master-0 kubenswrapper[7756]: I0220 11:53:02.405884 7756 generic.go:334] "Generic (PLEG): container finished" podID="eb93420d-7c5a-4492-bd16-0104104406b4" containerID="76e85ab561cbad6abc6fb8fe1c91c7b03e4b40963c9f88e69d0121b220aa047b" exitCode=0
Feb 20 11:53:02.406098 master-0 kubenswrapper[7756]: I0220 11:53:02.405968 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"eb93420d-7c5a-4492-bd16-0104104406b4","Type":"ContainerDied","Data":"76e85ab561cbad6abc6fb8fe1c91c7b03e4b40963c9f88e69d0121b220aa047b"}
Feb 20 11:53:02.408544 master-0 kubenswrapper[7756]: I0220 11:53:02.408478 7756 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="91bf4bc38d2da6c505ee04354464ef749c6984385a6a3cb062fc7393534e0bd7" exitCode=0
Feb 20 11:53:02.408544 master-0 kubenswrapper[7756]: I0220 11:53:02.408516 7756 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="f1682d7b4b37ab8ab7b0e93abba0b5ee3a264e78978d6dc34d6d434f13d2a6ae" exitCode=0
Feb 20 11:53:02.408664 master-0 kubenswrapper[7756]: I0220 11:53:02.408561 7756 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 11:53:02.408664 master-0 kubenswrapper[7756]: I0220 11:53:02.408575 7756 scope.go:117] "RemoveContainer" containerID="407a4490b53b516c4eaa24c4972588c07da9b5f9574f9b35da5b44a438b78bcc"
Feb 20 11:53:02.408748 master-0 kubenswrapper[7756]: I0220 11:53:02.408561 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcd9999695d850ee86685844ce22164c47296c700a3e8af3d20ba2a180990b4a"
Feb 20 11:53:02.422214 master-0 kubenswrapper[7756]: I0220 11:53:02.422141 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" podStartSLOduration=2.422124808 podStartE2EDuration="2.422124808s" podCreationTimestamp="2026-02-20 11:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:53:02.416870531 +0000 UTC m=+228.159118539" watchObservedRunningTime="2026-02-20 11:53:02.422124808 +0000 UTC m=+228.164372816"
Feb 20 11:53:02.428454 master-0 kubenswrapper[7756]: I0220 11:53:02.428408 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"}
Feb 20 11:53:02.428560 master-0 kubenswrapper[7756]: I0220 11:53:02.428455 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"0485d41f9e9692494bf4d5e2e9529ac2d8ff045a5850618429ca0e5f2f95327f"}
Feb 20 11:53:02.585849 master-0 kubenswrapper[7756]: I0220 11:53:02.585794 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="c9ad9373c007a4fcd25e70622bdc8deb" path="/var/lib/kubelet/pods/c9ad9373c007a4fcd25e70622bdc8deb/volumes"
Feb 20 11:53:02.586170 master-0 kubenswrapper[7756]: I0220 11:53:02.586150 7756 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Feb 20 11:53:02.602172 master-0 kubenswrapper[7756]: I0220 11:53:02.602131 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 20 11:53:02.602272 master-0 kubenswrapper[7756]: I0220 11:53:02.602172 7756 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="440f00a0-01e7-4314-9807-5173894fb112"
Feb 20 11:53:02.606179 master-0 kubenswrapper[7756]: I0220 11:53:02.606131 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 20 11:53:02.606179 master-0 kubenswrapper[7756]: I0220 11:53:02.606171 7756 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="440f00a0-01e7-4314-9807-5173894fb112"
Feb 20 11:53:03.445850 master-0 kubenswrapper[7756]: I0220 11:53:03.445778 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"}
Feb 20 11:53:03.445850 master-0 kubenswrapper[7756]: I0220 11:53:03.445849 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"}
Feb 20 11:53:03.446906 master-0 kubenswrapper[7756]: I0220 11:53:03.445873 7756
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"0616a5d031f34cdf4ba086c5e6e13dc1c06dc0cc61473c6faf71fc5fd1759c28"}
Feb 20 11:53:03.449188 master-0 kubenswrapper[7756]: I0220 11:53:03.449100 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" event={"ID":"e8c48a22-ed96-42c5-ac4a-dd7d4f204539","Type":"ContainerStarted","Data":"552b83c8bb34db4eff07bab765e53e42a7951f41a58fae8db68d750b59e22db8"}
Feb 20 11:53:03.505602 master-0 kubenswrapper[7756]: I0220 11:53:03.503292 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.503257078 podStartE2EDuration="2.503257078s" podCreationTimestamp="2026-02-20 11:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:53:03.497680123 +0000 UTC m=+229.239928171" watchObservedRunningTime="2026-02-20 11:53:03.503257078 +0000 UTC m=+229.245505156"
Feb 20 11:53:03.527251 master-0 kubenswrapper[7756]: I0220 11:53:03.527124 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" podStartSLOduration=3.527089552 podStartE2EDuration="3.527089552s" podCreationTimestamp="2026-02-20 11:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:53:03.523201774 +0000 UTC m=+229.265449842" watchObservedRunningTime="2026-02-20 11:53:03.527089552 +0000 UTC m=+229.269337570"
Feb 20 11:53:03.818149 master-0 kubenswrapper[7756]: I0220 11:53:03.818073 7756
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 11:53:03.892008 master-0 kubenswrapper[7756]: I0220 11:53:03.891923 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-var-lock\") pod \"eb93420d-7c5a-4492-bd16-0104104406b4\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") "
Feb 20 11:53:03.892008 master-0 kubenswrapper[7756]: I0220 11:53:03.892013 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb93420d-7c5a-4492-bd16-0104104406b4-kube-api-access\") pod \"eb93420d-7c5a-4492-bd16-0104104406b4\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") "
Feb 20 11:53:03.892349 master-0 kubenswrapper[7756]: I0220 11:53:03.892049 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-var-lock" (OuterVolumeSpecName: "var-lock") pod "eb93420d-7c5a-4492-bd16-0104104406b4" (UID: "eb93420d-7c5a-4492-bd16-0104104406b4"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:53:03.892349 master-0 kubenswrapper[7756]: I0220 11:53:03.892148 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-kubelet-dir\") pod \"eb93420d-7c5a-4492-bd16-0104104406b4\" (UID: \"eb93420d-7c5a-4492-bd16-0104104406b4\") "
Feb 20 11:53:03.892349 master-0 kubenswrapper[7756]: I0220 11:53:03.892316 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eb93420d-7c5a-4492-bd16-0104104406b4" (UID: "eb93420d-7c5a-4492-bd16-0104104406b4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:53:03.892906 master-0 kubenswrapper[7756]: I0220 11:53:03.892856 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:03.892906 master-0 kubenswrapper[7756]: I0220 11:53:03.892896 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb93420d-7c5a-4492-bd16-0104104406b4-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:03.897318 master-0 kubenswrapper[7756]: I0220 11:53:03.897203 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb93420d-7c5a-4492-bd16-0104104406b4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eb93420d-7c5a-4492-bd16-0104104406b4" (UID: "eb93420d-7c5a-4492-bd16-0104104406b4"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:53:03.995094 master-0 kubenswrapper[7756]: I0220 11:53:03.995005 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb93420d-7c5a-4492-bd16-0104104406b4-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:04.460515 master-0 kubenswrapper[7756]: I0220 11:53:04.460413 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"eb93420d-7c5a-4492-bd16-0104104406b4","Type":"ContainerDied","Data":"2afa2b7ebc56f1b83ba6eea0931272420c7f296c9bd03931d27ab411eab9454b"}
Feb 20 11:53:04.460515 master-0 kubenswrapper[7756]: I0220 11:53:04.460504 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2afa2b7ebc56f1b83ba6eea0931272420c7f296c9bd03931d27ab411eab9454b"
Feb 20 11:53:04.461387 master-0 kubenswrapper[7756]: I0220 11:53:04.460750 7756 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 11:53:11.968637 master-0 kubenswrapper[7756]: I0220 11:53:11.968559 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 11:53:11.968637 master-0 kubenswrapper[7756]: I0220 11:53:11.968620 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 11:53:11.968637 master-0 kubenswrapper[7756]: I0220 11:53:11.968640 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 11:53:11.968637 master-0 kubenswrapper[7756]: I0220 11:53:11.968653 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 11:53:11.973155 master-0 kubenswrapper[7756]: I0220 11:53:11.973094 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 11:53:11.973288 master-0 kubenswrapper[7756]: I0220 11:53:11.973255 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 11:53:12.522764 master-0 kubenswrapper[7756]: I0220 11:53:12.522693 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 11:53:12.526152 master-0 kubenswrapper[7756]: I0220 11:53:12.526091 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 11:53:18.435502 master-0 kubenswrapper[7756]: I0220 11:53:18.435431 7756 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"]
Feb 20 11:53:18.436315 master-0 kubenswrapper[7756]: E0220 11:53:18.435697 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb93420d-7c5a-4492-bd16-0104104406b4" containerName="installer"
Feb 20 11:53:18.436315 master-0 kubenswrapper[7756]: I0220 11:53:18.435715 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb93420d-7c5a-4492-bd16-0104104406b4" containerName="installer"
Feb 20 11:53:18.436315 master-0 kubenswrapper[7756]: I0220 11:53:18.435852 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb93420d-7c5a-4492-bd16-0104104406b4" containerName="installer"
Feb 20 11:53:18.438568 master-0 kubenswrapper[7756]: I0220 11:53:18.436494 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.439161 master-0 kubenswrapper[7756]: I0220 11:53:18.439122 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-k7mnd"
Feb 20 11:53:18.439718 master-0 kubenswrapper[7756]: I0220 11:53:18.439682 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 20 11:53:18.462588 master-0 kubenswrapper[7756]: I0220 11:53:18.455518 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"]
Feb 20 11:53:18.492619 master-0 kubenswrapper[7756]: I0220 11:53:18.492338 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxr6j\" (UniqueName: \"kubernetes.io/projected/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-kube-api-access-rxr6j\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") "
pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.492619 master-0 kubenswrapper[7756]: I0220 11:53:18.492515 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.492856 master-0 kubenswrapper[7756]: I0220 11:53:18.492721 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-proxy-tls\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.604345 master-0 kubenswrapper[7756]: I0220 11:53:18.604284 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.604649 master-0 kubenswrapper[7756]: I0220 11:53:18.604589 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-proxy-tls\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.604774 master-0 kubenswrapper[7756]: I0220
11:53:18.604739 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxr6j\" (UniqueName: \"kubernetes.io/projected/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-kube-api-access-rxr6j\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.605411 master-0 kubenswrapper[7756]: I0220 11:53:18.605370 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.609203 master-0 kubenswrapper[7756]: I0220 11:53:18.609161 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-proxy-tls\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.623115 master-0 kubenswrapper[7756]: I0220 11:53:18.623070 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxr6j\" (UniqueName: \"kubernetes.io/projected/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-kube-api-access-rxr6j\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:18.770708 master-0 kubenswrapper[7756]: I0220 11:53:18.770649 7756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 11:53:19.208648 master-0 kubenswrapper[7756]: I0220 11:53:19.206254 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"]
Feb 20 11:53:19.216684 master-0 kubenswrapper[7756]: W0220 11:53:19.216642 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29489539_68c6_49dd_bc1b_dcf0c7bb2ebe.slice/crio-54f65f910e458ec6e67c421fe2cab6c8d04efb4552cacded48383019268d4056 WatchSource:0}: Error finding container 54f65f910e458ec6e67c421fe2cab6c8d04efb4552cacded48383019268d4056: Status 404 returned error can't find the container with id 54f65f910e458ec6e67c421fe2cab6c8d04efb4552cacded48383019268d4056
Feb 20 11:53:19.529268 master-0 kubenswrapper[7756]: I0220 11:53:19.529193 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"]
Feb 20 11:53:19.530409 master-0 kubenswrapper[7756]: I0220 11:53:19.530365 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"
Feb 20 11:53:19.532248 master-0 kubenswrapper[7756]: I0220 11:53:19.532202 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 20 11:53:19.536622 master-0 kubenswrapper[7756]: I0220 11:53:19.536587 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7b65dc9fcb-fkkd5"]
Feb 20 11:53:19.537559 master-0 kubenswrapper[7756]: I0220 11:53:19.537496 7756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:53:19.542838 master-0 kubenswrapper[7756]: I0220 11:53:19.542785 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 20 11:53:19.542987 master-0 kubenswrapper[7756]: I0220 11:53:19.542873 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 20 11:53:19.543086 master-0 kubenswrapper[7756]: I0220 11:53:19.543063 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 20 11:53:19.543229 master-0 kubenswrapper[7756]: I0220 11:53:19.543205 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 20 11:53:19.543320 master-0 kubenswrapper[7756]: I0220 11:53:19.543290 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 20 11:53:19.543378 master-0 kubenswrapper[7756]: I0220 11:53:19.543328 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv"]
Feb 20 11:53:19.543741 master-0 kubenswrapper[7756]: I0220 11:53:19.543711 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 20 11:53:19.544187 master-0 kubenswrapper[7756]: I0220 11:53:19.544158 7756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv"
Feb 20 11:53:19.552940 master-0 kubenswrapper[7756]: I0220 11:53:19.552886 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn"]
Feb 20 11:53:19.561353 master-0 kubenswrapper[7756]: I0220 11:53:19.561322 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn"]
Feb 20 11:53:19.561353 master-0 kubenswrapper[7756]: I0220 11:53:19.561358 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"]
Feb 20 11:53:19.561490 master-0 kubenswrapper[7756]: I0220 11:53:19.561427 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn"
Feb 20 11:53:19.563876 master-0 kubenswrapper[7756]: I0220 11:53:19.562677 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv"]
Feb 20 11:53:19.564258 master-0 kubenswrapper[7756]: I0220 11:53:19.564225 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 20 11:53:19.570787 master-0 kubenswrapper[7756]: I0220 11:53:19.570631 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" event={"ID":"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe","Type":"ContainerStarted","Data":"b09856f07cbd1114fdce53282ebd66f15e4f24bb550a9d259be4785f7ad1c10c"}
Feb 20 11:53:19.570787 master-0 kubenswrapper[7756]: I0220 11:53:19.570683 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
event={"ID":"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe","Type":"ContainerStarted","Data":"4b16a34c164e3dca501c4332ff0f388668786b32102a5a19b7bf01b7c8440060"}
Feb 20 11:53:19.570787 master-0 kubenswrapper[7756]: I0220 11:53:19.570699 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" event={"ID":"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe","Type":"ContainerStarted","Data":"54f65f910e458ec6e67c421fe2cab6c8d04efb4552cacded48383019268d4056"}
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.615736 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/4d8cd7c5-31fd-4dca-b39b-6d62eb573707-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-s57jn\" (UID: \"4d8cd7c5-31fd-4dca-b39b-6d62eb573707\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn"
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.615807 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-default-certificate\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.615840 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-stats-auth\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.615900 7756 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnvt\" (UniqueName: \"kubernetes.io/projected/fca78741-ca32-4867-b44f-483fd62f2942-kube-api-access-2cnvt\") pod \"network-check-source-58fb6744f5-gjgxv\" (UID: \"fca78741-ca32-4867-b44f-483fd62f2942\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv"
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.616033 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c078827-3bdb-4509-aeb3-eb558df1f6e7-service-ca-bundle\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.616105 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5827049e-6178-46cf-83c5-cff55daac768-config-volume\") pod \"collect-profiles-29526465-tpgw4\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.616145 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2qdb\" (UniqueName: \"kubernetes.io/projected/9c078827-3bdb-4509-aeb3-eb558df1f6e7-kube-api-access-x2qdb\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.616167 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-metrics-certs\") pod
\"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.616186 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9czx6\" (UniqueName: \"kubernetes.io/projected/5827049e-6178-46cf-83c5-cff55daac768-kube-api-access-9czx6\") pod \"collect-profiles-29526465-tpgw4\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"
Feb 20 11:53:19.617677 master-0 kubenswrapper[7756]: I0220 11:53:19.616204 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5827049e-6178-46cf-83c5-cff55daac768-secret-volume\") pod \"collect-profiles-29526465-tpgw4\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"
Feb 20 11:53:19.634208 master-0 kubenswrapper[7756]: I0220 11:53:19.634132 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" podStartSLOduration=1.634114944 podStartE2EDuration="1.634114944s" podCreationTimestamp="2026-02-20 11:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:53:19.63184226 +0000 UTC m=+245.374090268" watchObservedRunningTime="2026-02-20 11:53:19.634114944 +0000 UTC m=+245.376362942"
Feb 20 11:53:19.717989 master-0 kubenswrapper[7756]: I0220 11:53:19.717778 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnvt\" (UniqueName: \"kubernetes.io/projected/fca78741-ca32-4867-b44f-483fd62f2942-kube-api-access-2cnvt\") pod
\"network-check-source-58fb6744f5-gjgxv\" (UID: \"fca78741-ca32-4867-b44f-483fd62f2942\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv" Feb 20 11:53:19.717989 master-0 kubenswrapper[7756]: I0220 11:53:19.717973 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c078827-3bdb-4509-aeb3-eb558df1f6e7-service-ca-bundle\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.718373 master-0 kubenswrapper[7756]: I0220 11:53:19.718011 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5827049e-6178-46cf-83c5-cff55daac768-config-volume\") pod \"collect-profiles-29526465-tpgw4\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" Feb 20 11:53:19.718373 master-0 kubenswrapper[7756]: I0220 11:53:19.718037 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2qdb\" (UniqueName: \"kubernetes.io/projected/9c078827-3bdb-4509-aeb3-eb558df1f6e7-kube-api-access-x2qdb\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.718373 master-0 kubenswrapper[7756]: I0220 11:53:19.718100 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-metrics-certs\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.718373 master-0 kubenswrapper[7756]: I0220 11:53:19.718119 7756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9czx6\" (UniqueName: \"kubernetes.io/projected/5827049e-6178-46cf-83c5-cff55daac768-kube-api-access-9czx6\") pod \"collect-profiles-29526465-tpgw4\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" Feb 20 11:53:19.718373 master-0 kubenswrapper[7756]: I0220 11:53:19.718167 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5827049e-6178-46cf-83c5-cff55daac768-secret-volume\") pod \"collect-profiles-29526465-tpgw4\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" Feb 20 11:53:19.720387 master-0 kubenswrapper[7756]: I0220 11:53:19.719428 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/4d8cd7c5-31fd-4dca-b39b-6d62eb573707-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-s57jn\" (UID: \"4d8cd7c5-31fd-4dca-b39b-6d62eb573707\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" Feb 20 11:53:19.720586 master-0 kubenswrapper[7756]: I0220 11:53:19.720395 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5827049e-6178-46cf-83c5-cff55daac768-config-volume\") pod \"collect-profiles-29526465-tpgw4\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" Feb 20 11:53:19.720586 master-0 kubenswrapper[7756]: I0220 11:53:19.720428 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-default-certificate\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.720586 master-0 kubenswrapper[7756]: I0220 11:53:19.720471 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-stats-auth\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.722383 master-0 kubenswrapper[7756]: I0220 11:53:19.722328 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-metrics-certs\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.722804 master-0 kubenswrapper[7756]: I0220 11:53:19.722757 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c078827-3bdb-4509-aeb3-eb558df1f6e7-service-ca-bundle\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.726622 master-0 kubenswrapper[7756]: I0220 11:53:19.726568 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5827049e-6178-46cf-83c5-cff55daac768-secret-volume\") pod \"collect-profiles-29526465-tpgw4\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" Feb 20 11:53:19.732137 master-0 kubenswrapper[7756]: I0220 11:53:19.732064 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-default-certificate\") pod 
\"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.732410 master-0 kubenswrapper[7756]: I0220 11:53:19.732360 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/4d8cd7c5-31fd-4dca-b39b-6d62eb573707-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-s57jn\" (UID: \"4d8cd7c5-31fd-4dca-b39b-6d62eb573707\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" Feb 20 11:53:19.733099 master-0 kubenswrapper[7756]: I0220 11:53:19.733036 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-stats-auth\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.734620 master-0 kubenswrapper[7756]: I0220 11:53:19.734560 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2qdb\" (UniqueName: \"kubernetes.io/projected/9c078827-3bdb-4509-aeb3-eb558df1f6e7-kube-api-access-x2qdb\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.735779 master-0 kubenswrapper[7756]: I0220 11:53:19.735721 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9czx6\" (UniqueName: \"kubernetes.io/projected/5827049e-6178-46cf-83c5-cff55daac768-kube-api-access-9czx6\") pod \"collect-profiles-29526465-tpgw4\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" Feb 20 11:53:19.742331 master-0 kubenswrapper[7756]: I0220 11:53:19.742269 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-2cnvt\" (UniqueName: \"kubernetes.io/projected/fca78741-ca32-4867-b44f-483fd62f2942-kube-api-access-2cnvt\") pod \"network-check-source-58fb6744f5-gjgxv\" (UID: \"fca78741-ca32-4867-b44f-483fd62f2942\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv" Feb 20 11:53:19.851847 master-0 kubenswrapper[7756]: I0220 11:53:19.851779 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" Feb 20 11:53:19.876526 master-0 kubenswrapper[7756]: I0220 11:53:19.876423 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:19.891766 master-0 kubenswrapper[7756]: I0220 11:53:19.890892 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv" Feb 20 11:53:19.907765 master-0 kubenswrapper[7756]: I0220 11:53:19.907712 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" Feb 20 11:53:19.940154 master-0 kubenswrapper[7756]: I0220 11:53:19.940119 7756 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 11:53:19.964149 master-0 kubenswrapper[7756]: W0220 11:53:19.962832 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c078827_3bdb_4509_aeb3_eb558df1f6e7.slice/crio-d462fc60c97084643070378d982a956e1f53a8cb223bde5d6b24565dab2fc818 WatchSource:0}: Error finding container d462fc60c97084643070378d982a956e1f53a8cb223bde5d6b24565dab2fc818: Status 404 returned error can't find the container with id d462fc60c97084643070378d982a956e1f53a8cb223bde5d6b24565dab2fc818 Feb 20 11:53:20.280798 master-0 kubenswrapper[7756]: W0220 11:53:20.280751 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5827049e_6178_46cf_83c5_cff55daac768.slice/crio-985737b750f90a1abc5074451459d70393e87cfe0c6e8a5a88f5b55243c61581 WatchSource:0}: Error finding container 985737b750f90a1abc5074451459d70393e87cfe0c6e8a5a88f5b55243c61581: Status 404 returned error can't find the container with id 985737b750f90a1abc5074451459d70393e87cfe0c6e8a5a88f5b55243c61581 Feb 20 11:53:20.281808 master-0 kubenswrapper[7756]: I0220 11:53:20.281741 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"] Feb 20 11:53:20.310301 master-0 kubenswrapper[7756]: W0220 11:53:20.310243 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d8cd7c5_31fd_4dca_b39b_6d62eb573707.slice/crio-ad1e0968f9a0f9395b52d4138ec76c893d5513164ae2900823432b7870c6a271 WatchSource:0}: Error finding container 
ad1e0968f9a0f9395b52d4138ec76c893d5513164ae2900823432b7870c6a271: Status 404 returned error can't find the container with id ad1e0968f9a0f9395b52d4138ec76c893d5513164ae2900823432b7870c6a271 Feb 20 11:53:20.312520 master-0 kubenswrapper[7756]: I0220 11:53:20.311425 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn"] Feb 20 11:53:20.362619 master-0 kubenswrapper[7756]: I0220 11:53:20.362325 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv"] Feb 20 11:53:20.581624 master-0 kubenswrapper[7756]: I0220 11:53:20.581468 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" event={"ID":"4d8cd7c5-31fd-4dca-b39b-6d62eb573707","Type":"ContainerStarted","Data":"ad1e0968f9a0f9395b52d4138ec76c893d5513164ae2900823432b7870c6a271"} Feb 20 11:53:20.589314 master-0 kubenswrapper[7756]: I0220 11:53:20.589253 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv" event={"ID":"fca78741-ca32-4867-b44f-483fd62f2942","Type":"ContainerStarted","Data":"6c7bdb69018d4da44ceed51e332b7075cc0fe758001882272428aa25ce0f3265"} Feb 20 11:53:20.589314 master-0 kubenswrapper[7756]: I0220 11:53:20.589305 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv" event={"ID":"fca78741-ca32-4867-b44f-483fd62f2942","Type":"ContainerStarted","Data":"b77cf717eaf94cf8bf6837636ba7313b88c41d8f394ba5e1308558d0bca1c808"} Feb 20 11:53:20.589517 master-0 kubenswrapper[7756]: I0220 11:53:20.589322 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" 
event={"ID":"5827049e-6178-46cf-83c5-cff55daac768","Type":"ContainerStarted","Data":"2f6e2cdefb1f6c8584f138cfc2ac8b1cae268cc4e1730c5cf5119ebd8fc9f159"} Feb 20 11:53:20.589517 master-0 kubenswrapper[7756]: I0220 11:53:20.589337 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" event={"ID":"5827049e-6178-46cf-83c5-cff55daac768","Type":"ContainerStarted","Data":"985737b750f90a1abc5074451459d70393e87cfe0c6e8a5a88f5b55243c61581"} Feb 20 11:53:20.589517 master-0 kubenswrapper[7756]: I0220 11:53:20.589349 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerStarted","Data":"d462fc60c97084643070378d982a956e1f53a8cb223bde5d6b24565dab2fc818"} Feb 20 11:53:20.609036 master-0 kubenswrapper[7756]: I0220 11:53:20.608956 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv" podStartSLOduration=289.60893596 podStartE2EDuration="4m49.60893596s" podCreationTimestamp="2026-02-20 11:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:53:20.605507694 +0000 UTC m=+246.347755742" watchObservedRunningTime="2026-02-20 11:53:20.60893596 +0000 UTC m=+246.351183968" Feb 20 11:53:20.628159 master-0 kubenswrapper[7756]: I0220 11:53:20.628036 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" podStartSLOduration=323.627995601 podStartE2EDuration="5m23.627995601s" podCreationTimestamp="2026-02-20 11:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:53:20.62507575 +0000 UTC m=+246.367323768" 
watchObservedRunningTime="2026-02-20 11:53:20.627995601 +0000 UTC m=+246.370243629" Feb 20 11:53:21.673798 master-0 kubenswrapper[7756]: I0220 11:53:21.673717 7756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 20 11:53:21.674512 master-0 kubenswrapper[7756]: I0220 11:53:21.673968 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" containerID="cri-o://842b6aa1bc5c1b0962214c188df26e12d0920e5f8c4d3227e8ab1c9741425d8b" gracePeriod=30 Feb 20 11:53:21.680478 master-0 kubenswrapper[7756]: I0220 11:53:21.680441 7756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 11:53:21.682424 master-0 kubenswrapper[7756]: E0220 11:53:21.680764 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 11:53:21.682424 master-0 kubenswrapper[7756]: I0220 11:53:21.680787 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 11:53:21.682424 master-0 kubenswrapper[7756]: I0220 11:53:21.680938 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 11:53:21.682424 master-0 kubenswrapper[7756]: I0220 11:53:21.680952 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 11:53:21.682424 master-0 kubenswrapper[7756]: E0220 11:53:21.681088 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 11:53:21.682424 master-0 kubenswrapper[7756]: I0220 11:53:21.681099 7756 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 11:53:21.682424 master-0 kubenswrapper[7756]: I0220 11:53:21.682225 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:53:21.762643 master-0 kubenswrapper[7756]: I0220 11:53:21.762578 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:53:21.762643 master-0 kubenswrapper[7756]: I0220 11:53:21.762635 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:53:21.772201 master-0 kubenswrapper[7756]: I0220 11:53:21.772118 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 11:53:21.864435 master-0 kubenswrapper[7756]: I0220 11:53:21.864361 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:53:21.864863 master-0 kubenswrapper[7756]: I0220 11:53:21.864694 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:53:21.864863 master-0 kubenswrapper[7756]: I0220 11:53:21.864797 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:53:21.864863 master-0 kubenswrapper[7756]: I0220 11:53:21.864543 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:53:22.070013 master-0 kubenswrapper[7756]: I0220 11:53:22.069946 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:53:22.597844 master-0 kubenswrapper[7756]: I0220 11:53:22.597786 7756 generic.go:334] "Generic (PLEG): container finished" podID="35310285-fff9-43d6-ad9a-5d959ef116ec" containerID="359ae664c23ea8eeb6016bf515179345b86f7e1a68413d3d25df9e81032b59ac" exitCode=0 Feb 20 11:53:22.597844 master-0 kubenswrapper[7756]: I0220 11:53:22.597847 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-retry-1-master-0" event={"ID":"35310285-fff9-43d6-ad9a-5d959ef116ec","Type":"ContainerDied","Data":"359ae664c23ea8eeb6016bf515179345b86f7e1a68413d3d25df9e81032b59ac"} Feb 20 11:53:22.603356 master-0 kubenswrapper[7756]: I0220 11:53:22.603310 7756 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="842b6aa1bc5c1b0962214c188df26e12d0920e5f8c4d3227e8ab1c9741425d8b" exitCode=0 Feb 20 11:53:22.603424 master-0 kubenswrapper[7756]: I0220 11:53:22.603371 7756 scope.go:117] "RemoveContainer" containerID="6f48bf3168ea3ca5cdb5d4b4fe30f40410c99744121d1afe1db8ccea90206a28" Feb 20 11:53:22.908133 master-0 kubenswrapper[7756]: W0220 11:53:22.908072 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod416b60c941b7224bbf94e8f78b59b910.slice/crio-bbff31dd4ba8a02321905b3bade3855c36331a3c01be642ab84f9369eaefe349 WatchSource:0}: Error finding container bbff31dd4ba8a02321905b3bade3855c36331a3c01be642ab84f9369eaefe349: Status 404 returned error can't find the container with id bbff31dd4ba8a02321905b3bade3855c36331a3c01be642ab84f9369eaefe349 Feb 20 11:53:23.036737 master-0 kubenswrapper[7756]: I0220 11:53:23.032939 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 20 11:53:23.185257 master-0 kubenswrapper[7756]: I0220 11:53:23.185212 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " Feb 20 11:53:23.185405 master-0 kubenswrapper[7756]: I0220 11:53:23.185381 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " Feb 20 11:53:23.185768 master-0 kubenswrapper[7756]: I0220 11:53:23.185737 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs" (OuterVolumeSpecName: "logs") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:53:23.185832 master-0 kubenswrapper[7756]: I0220 11:53:23.185779 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets" (OuterVolumeSpecName: "secrets") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:53:23.287489 master-0 kubenswrapper[7756]: I0220 11:53:23.287390 7756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 11:53:23.287489 master-0 kubenswrapper[7756]: I0220 11:53:23.287446 7756 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") on node \"master-0\" DevicePath \"\"" Feb 20 11:53:23.612686 master-0 kubenswrapper[7756]: I0220 11:53:23.612622 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" event={"ID":"4d8cd7c5-31fd-4dca-b39b-6d62eb573707","Type":"ContainerStarted","Data":"ac334a8d1d6966d05d9ae66e347d0c5dfb63c55a764e07e32abe04a43506c36f"} Feb 20 11:53:23.613028 master-0 kubenswrapper[7756]: I0220 11:53:23.613002 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" Feb 20 11:53:23.615217 master-0 kubenswrapper[7756]: I0220 11:53:23.615158 7756 generic.go:334] "Generic (PLEG): container finished" podID="416b60c941b7224bbf94e8f78b59b910" containerID="e9300776eee7b9f506a4b0f31aa2e187971b5654c77d21860bda2d88ce86d8a4" exitCode=0 Feb 20 11:53:23.615568 master-0 kubenswrapper[7756]: I0220 11:53:23.615297 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerDied","Data":"e9300776eee7b9f506a4b0f31aa2e187971b5654c77d21860bda2d88ce86d8a4"} Feb 20 11:53:23.615568 master-0 kubenswrapper[7756]: I0220 11:53:23.615407 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"bbff31dd4ba8a02321905b3bade3855c36331a3c01be642ab84f9369eaefe349"} Feb 20 11:53:23.616983 master-0 kubenswrapper[7756]: I0220 11:53:23.616923 7756 generic.go:334] "Generic (PLEG): container finished" podID="5827049e-6178-46cf-83c5-cff55daac768" containerID="2f6e2cdefb1f6c8584f138cfc2ac8b1cae268cc4e1730c5cf5119ebd8fc9f159" exitCode=0 Feb 20 11:53:23.617021 master-0 kubenswrapper[7756]: I0220 11:53:23.617000 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" event={"ID":"5827049e-6178-46cf-83c5-cff55daac768","Type":"ContainerDied","Data":"2f6e2cdefb1f6c8584f138cfc2ac8b1cae268cc4e1730c5cf5119ebd8fc9f159"} Feb 20 11:53:23.619161 master-0 kubenswrapper[7756]: I0220 11:53:23.619081 7756 scope.go:117] "RemoveContainer" containerID="842b6aa1bc5c1b0962214c188df26e12d0920e5f8c4d3227e8ab1c9741425d8b" Feb 20 11:53:23.619161 master-0 kubenswrapper[7756]: I0220 11:53:23.619120 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 20 11:53:23.622467 master-0 kubenswrapper[7756]: I0220 11:53:23.622419 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" Feb 20 11:53:23.629069 master-0 kubenswrapper[7756]: I0220 11:53:23.629018 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerStarted","Data":"59934f71df55065f6ab9cbdff084344dc055464c00d5db2644ae6d5d661e4e89"} Feb 20 11:53:23.662958 master-0 kubenswrapper[7756]: I0220 11:53:23.660815 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" podStartSLOduration=205.072421282 podStartE2EDuration="3m27.660777849s" podCreationTimestamp="2026-02-20 11:49:56 +0000 UTC" firstStartedPulling="2026-02-20 11:53:20.315111089 +0000 UTC m=+246.057359097" lastFinishedPulling="2026-02-20 11:53:22.903467636 +0000 UTC m=+248.645715664" observedRunningTime="2026-02-20 11:53:23.637098228 +0000 UTC m=+249.379346276" watchObservedRunningTime="2026-02-20 11:53:23.660777849 +0000 UTC m=+249.403025927" Feb 20 11:53:23.723778 master-0 kubenswrapper[7756]: I0220 11:53:23.723697 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podStartSLOduration=215.795452489 podStartE2EDuration="3m38.723675382s" podCreationTimestamp="2026-02-20 11:49:45 +0000 UTC" firstStartedPulling="2026-02-20 11:53:19.975907192 +0000 UTC m=+245.718155220" lastFinishedPulling="2026-02-20 11:53:22.904130095 +0000 UTC m=+248.646378113" observedRunningTime="2026-02-20 11:53:23.71999888 +0000 UTC m=+249.462246898" watchObservedRunningTime="2026-02-20 11:53:23.723675382 +0000 UTC m=+249.465923400" Feb 20 11:53:23.878832 
master-0 kubenswrapper[7756]: I0220 11:53:23.878270 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:23.882577 master-0 kubenswrapper[7756]: I0220 11:53:23.880780 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:23.882577 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:23.882577 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:23.882577 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:23.882577 master-0 kubenswrapper[7756]: I0220 11:53:23.880825 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:23.967286 master-0 kubenswrapper[7756]: I0220 11:53:23.967228 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 11:53:24.098725 master-0 kubenswrapper[7756]: I0220 11:53:24.098574 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-var-lock\") pod \"35310285-fff9-43d6-ad9a-5d959ef116ec\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") " Feb 20 11:53:24.098725 master-0 kubenswrapper[7756]: I0220 11:53:24.098714 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-var-lock" (OuterVolumeSpecName: "var-lock") pod "35310285-fff9-43d6-ad9a-5d959ef116ec" (UID: "35310285-fff9-43d6-ad9a-5d959ef116ec"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:53:24.099135 master-0 kubenswrapper[7756]: I0220 11:53:24.098766 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35310285-fff9-43d6-ad9a-5d959ef116ec-kube-api-access\") pod \"35310285-fff9-43d6-ad9a-5d959ef116ec\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") "
Feb 20 11:53:24.099135 master-0 kubenswrapper[7756]: I0220 11:53:24.098933 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-kubelet-dir\") pod \"35310285-fff9-43d6-ad9a-5d959ef116ec\" (UID: \"35310285-fff9-43d6-ad9a-5d959ef116ec\") "
Feb 20 11:53:24.099135 master-0 kubenswrapper[7756]: I0220 11:53:24.098992 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "35310285-fff9-43d6-ad9a-5d959ef116ec" (UID: "35310285-fff9-43d6-ad9a-5d959ef116ec"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:53:24.099389 master-0 kubenswrapper[7756]: I0220 11:53:24.099264 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:24.099389 master-0 kubenswrapper[7756]: I0220 11:53:24.099284 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35310285-fff9-43d6-ad9a-5d959ef116ec-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:24.101342 master-0 kubenswrapper[7756]: I0220 11:53:24.101259 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35310285-fff9-43d6-ad9a-5d959ef116ec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "35310285-fff9-43d6-ad9a-5d959ef116ec" (UID: "35310285-fff9-43d6-ad9a-5d959ef116ec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:53:24.201280 master-0 kubenswrapper[7756]: I0220 11:53:24.201206 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35310285-fff9-43d6-ad9a-5d959ef116ec-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:24.556459 master-0 kubenswrapper[7756]: I0220 11:53:24.556410 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4wkh4"]
Feb 20 11:53:24.556706 master-0 kubenswrapper[7756]: E0220 11:53:24.556668 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35310285-fff9-43d6-ad9a-5d959ef116ec" containerName="installer"
Feb 20 11:53:24.556706 master-0 kubenswrapper[7756]: I0220 11:53:24.556682 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="35310285-fff9-43d6-ad9a-5d959ef116ec" containerName="installer"
Feb 20 11:53:24.556799 master-0 kubenswrapper[7756]: I0220 11:53:24.556781 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="35310285-fff9-43d6-ad9a-5d959ef116ec" containerName="installer"
Feb 20 11:53:24.557189 master-0 kubenswrapper[7756]: I0220 11:53:24.557164 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.558640 master-0 kubenswrapper[7756]: I0220 11:53:24.558602 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-92b9q"
Feb 20 11:53:24.559935 master-0 kubenswrapper[7756]: I0220 11:53:24.559406 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 20 11:53:24.559935 master-0 kubenswrapper[7756]: I0220 11:53:24.559430 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 20 11:53:24.586748 master-0 kubenswrapper[7756]: I0220 11:53:24.586708 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c3cb71c9851003c8de7e7c5db4b87e" path="/var/lib/kubelet/pods/56c3cb71c9851003c8de7e7c5db4b87e/volumes"
Feb 20 11:53:24.587195 master-0 kubenswrapper[7756]: I0220 11:53:24.587180 7756 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Feb 20 11:53:24.616108 master-0 kubenswrapper[7756]: I0220 11:53:24.616054 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 20 11:53:24.616108 master-0 kubenswrapper[7756]: I0220 11:53:24.616101 7756 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="33705bb4-8996-4330-a613-4a7a1601592c"
Feb 20 11:53:24.627150 master-0 kubenswrapper[7756]: I0220 11:53:24.627088 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 20 11:53:24.627150 master-0 kubenswrapper[7756]: I0220 11:53:24.627141 7756 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="33705bb4-8996-4330-a613-4a7a1601592c"
Feb 20 11:53:24.637440 master-0 kubenswrapper[7756]: I0220 11:53:24.637404 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"1d1e4f19b4b937664918df87724b0ce6399cbc186e4b82d3db56d2fb037a5e05"}
Feb 20 11:53:24.637440 master-0 kubenswrapper[7756]: I0220 11:53:24.637442 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"4dfeade65eb878550b91a87841f50892c43c67d9c3d37a72dc5c09f4d1bfeb67"}
Feb 20 11:53:24.637702 master-0 kubenswrapper[7756]: I0220 11:53:24.637452 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"610ed904564d38a9663079b5791a3bed3f3fde288a983c4b6a5a9408be5ffc50"}
Feb 20 11:53:24.637702 master-0 kubenswrapper[7756]: I0220 11:53:24.637591 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:53:24.640471 master-0 kubenswrapper[7756]: I0220 11:53:24.640420 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-retry-1-master-0" event={"ID":"35310285-fff9-43d6-ad9a-5d959ef116ec","Type":"ContainerDied","Data":"4736b5e4686f09c0b07f8d18c3b19a3ccd55085c207b7cd94523bcb6efbbf4ee"}
Feb 20 11:53:24.640579 master-0 kubenswrapper[7756]: I0220 11:53:24.640479 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4736b5e4686f09c0b07f8d18c3b19a3ccd55085c207b7cd94523bcb6efbbf4ee"
Feb 20 11:53:24.640647 master-0 kubenswrapper[7756]: I0220 11:53:24.640606 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-retry-1-master-0"
Feb 20 11:53:24.688556 master-0 kubenswrapper[7756]: I0220 11:53:24.688469 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.688445588 podStartE2EDuration="3.688445588s" podCreationTimestamp="2026-02-20 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:53:24.6669898 +0000 UTC m=+250.409237798" watchObservedRunningTime="2026-02-20 11:53:24.688445588 +0000 UTC m=+250.430693606"
Feb 20 11:53:24.700072 master-0 kubenswrapper[7756]: I0220 11:53:24.700019 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"]
Feb 20 11:53:24.701033 master-0 kubenswrapper[7756]: I0220 11:53:24.700989 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.702588 master-0 kubenswrapper[7756]: I0220 11:53:24.702504 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-jxm2z"
Feb 20 11:53:24.702795 master-0 kubenswrapper[7756]: I0220 11:53:24.702779 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 20 11:53:24.703214 master-0 kubenswrapper[7756]: I0220 11:53:24.703189 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 20 11:53:24.703430 master-0 kubenswrapper[7756]: I0220 11:53:24.703406 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 20 11:53:24.706109 master-0 kubenswrapper[7756]: I0220 11:53:24.706075 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-certs\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.706172 master-0 kubenswrapper[7756]: I0220 11:53:24.706127 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m98rt\" (UniqueName: \"kubernetes.io/projected/2f9cd117-c84f-44c9-80a9-879a04d62934-kube-api-access-m98rt\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.706207 master-0 kubenswrapper[7756]: I0220 11:53:24.706193 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-node-bootstrap-token\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.709236 master-0 kubenswrapper[7756]: I0220 11:53:24.709053 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"]
Feb 20 11:53:24.807362 master-0 kubenswrapper[7756]: I0220 11:53:24.807290 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-certs\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.807362 master-0 kubenswrapper[7756]: I0220 11:53:24.807357 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fe0660-fae4-4f97-8895-dbc4845cee40-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.807362 master-0 kubenswrapper[7756]: I0220 11:53:24.807381 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m98rt\" (UniqueName: \"kubernetes.io/projected/2f9cd117-c84f-44c9-80a9-879a04d62934-kube-api-access-m98rt\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.807666 master-0 kubenswrapper[7756]: I0220 11:53:24.807413 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r85p\" (UniqueName: \"kubernetes.io/projected/b9fe0660-fae4-4f97-8895-dbc4845cee40-kube-api-access-7r85p\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.807666 master-0 kubenswrapper[7756]: I0220 11:53:24.807441 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.807666 master-0 kubenswrapper[7756]: I0220 11:53:24.807464 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-node-bootstrap-token\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.807666 master-0 kubenswrapper[7756]: I0220 11:53:24.807493 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.812134 master-0 kubenswrapper[7756]: I0220 11:53:24.812100 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-certs\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.812646 master-0 kubenswrapper[7756]: I0220 11:53:24.812601 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-node-bootstrap-token\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.822958 master-0 kubenswrapper[7756]: I0220 11:53:24.822937 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m98rt\" (UniqueName: \"kubernetes.io/projected/2f9cd117-c84f-44c9-80a9-879a04d62934-kube-api-access-m98rt\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.878034 master-0 kubenswrapper[7756]: I0220 11:53:24.877999 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 11:53:24.889855 master-0 kubenswrapper[7756]: I0220 11:53:24.889814 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:24.889855 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:24.889855 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:24.889855 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:24.890033 master-0 kubenswrapper[7756]: I0220 11:53:24.889874 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:24.912245 master-0 kubenswrapper[7756]: I0220 11:53:24.908376 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fe0660-fae4-4f97-8895-dbc4845cee40-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.912245 master-0 kubenswrapper[7756]: I0220 11:53:24.908426 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r85p\" (UniqueName: \"kubernetes.io/projected/b9fe0660-fae4-4f97-8895-dbc4845cee40-kube-api-access-7r85p\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.912245 master-0 kubenswrapper[7756]: I0220 11:53:24.908459 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.912245 master-0 kubenswrapper[7756]: I0220 11:53:24.908490 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.912245 master-0 kubenswrapper[7756]: I0220 11:53:24.910016 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fe0660-fae4-4f97-8895-dbc4845cee40-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.912245 master-0 kubenswrapper[7756]: I0220 11:53:24.912081 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.914486 master-0 kubenswrapper[7756]: I0220 11:53:24.913900 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:24.931695 master-0 kubenswrapper[7756]: I0220 11:53:24.931658 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r85p\" (UniqueName: \"kubernetes.io/projected/b9fe0660-fae4-4f97-8895-dbc4845cee40-kube-api-access-7r85p\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:25.012592 master-0 kubenswrapper[7756]: I0220 11:53:25.012557 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"
Feb 20 11:53:25.021642 master-0 kubenswrapper[7756]: I0220 11:53:25.021594 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 11:53:25.112786 master-0 kubenswrapper[7756]: I0220 11:53:25.111721 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9czx6\" (UniqueName: \"kubernetes.io/projected/5827049e-6178-46cf-83c5-cff55daac768-kube-api-access-9czx6\") pod \"5827049e-6178-46cf-83c5-cff55daac768\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") "
Feb 20 11:53:25.112786 master-0 kubenswrapper[7756]: I0220 11:53:25.111790 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5827049e-6178-46cf-83c5-cff55daac768-secret-volume\") pod \"5827049e-6178-46cf-83c5-cff55daac768\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") "
Feb 20 11:53:25.112786 master-0 kubenswrapper[7756]: I0220 11:53:25.111844 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5827049e-6178-46cf-83c5-cff55daac768-config-volume\") pod \"5827049e-6178-46cf-83c5-cff55daac768\" (UID: \"5827049e-6178-46cf-83c5-cff55daac768\") "
Feb 20 11:53:25.112786 master-0 kubenswrapper[7756]: I0220 11:53:25.112441 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5827049e-6178-46cf-83c5-cff55daac768-config-volume" (OuterVolumeSpecName: "config-volume") pod "5827049e-6178-46cf-83c5-cff55daac768" (UID: "5827049e-6178-46cf-83c5-cff55daac768"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 11:53:25.116297 master-0 kubenswrapper[7756]: I0220 11:53:25.116261 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5827049e-6178-46cf-83c5-cff55daac768-kube-api-access-9czx6" (OuterVolumeSpecName: "kube-api-access-9czx6") pod "5827049e-6178-46cf-83c5-cff55daac768" (UID: "5827049e-6178-46cf-83c5-cff55daac768"). InnerVolumeSpecName "kube-api-access-9czx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:53:25.117781 master-0 kubenswrapper[7756]: I0220 11:53:25.117379 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5827049e-6178-46cf-83c5-cff55daac768-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5827049e-6178-46cf-83c5-cff55daac768" (UID: "5827049e-6178-46cf-83c5-cff55daac768"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 11:53:25.213066 master-0 kubenswrapper[7756]: I0220 11:53:25.212896 7756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5827049e-6178-46cf-83c5-cff55daac768-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:25.213066 master-0 kubenswrapper[7756]: I0220 11:53:25.212936 7756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5827049e-6178-46cf-83c5-cff55daac768-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:25.213066 master-0 kubenswrapper[7756]: I0220 11:53:25.212946 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9czx6\" (UniqueName: \"kubernetes.io/projected/5827049e-6178-46cf-83c5-cff55daac768-kube-api-access-9czx6\") on node \"master-0\" DevicePath \"\""
Feb 20 11:53:25.471651 master-0 kubenswrapper[7756]: I0220 11:53:25.470785 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"]
Feb 20 11:53:25.481411 master-0 kubenswrapper[7756]: W0220 11:53:25.481348 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9fe0660_fae4_4f97_8895_dbc4845cee40.slice/crio-17031191ab6d96a7b42b27f8e62cc7de662a0a1661bf978c7cf3315a18929da9 WatchSource:0}: Error finding container 17031191ab6d96a7b42b27f8e62cc7de662a0a1661bf978c7cf3315a18929da9: Status 404 returned error can't find the container with id 17031191ab6d96a7b42b27f8e62cc7de662a0a1661bf978c7cf3315a18929da9
Feb 20 11:53:25.649823 master-0 kubenswrapper[7756]: I0220 11:53:25.649708 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" event={"ID":"5827049e-6178-46cf-83c5-cff55daac768","Type":"ContainerDied","Data":"985737b750f90a1abc5074451459d70393e87cfe0c6e8a5a88f5b55243c61581"}
Feb 20 11:53:25.649823 master-0 kubenswrapper[7756]: I0220 11:53:25.649753 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"
Feb 20 11:53:25.650132 master-0 kubenswrapper[7756]: I0220 11:53:25.649777 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985737b750f90a1abc5074451459d70393e87cfe0c6e8a5a88f5b55243c61581"
Feb 20 11:53:25.652336 master-0 kubenswrapper[7756]: I0220 11:53:25.652265 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4wkh4" event={"ID":"2f9cd117-c84f-44c9-80a9-879a04d62934","Type":"ContainerStarted","Data":"b8f0e521193dc13091df547594f2c698a4aee268a1b0ed67b7f76aaf657656ff"}
Feb 20 11:53:25.652441 master-0 kubenswrapper[7756]: I0220 11:53:25.652349 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4wkh4" event={"ID":"2f9cd117-c84f-44c9-80a9-879a04d62934","Type":"ContainerStarted","Data":"d0a141b311d0fcd6bd712d0075c6fb1c7f72a45707678fd94f7971d15d34a88f"}
Feb 20 11:53:25.654582 master-0 kubenswrapper[7756]: I0220 11:53:25.654518 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" event={"ID":"b9fe0660-fae4-4f97-8895-dbc4845cee40","Type":"ContainerStarted","Data":"17031191ab6d96a7b42b27f8e62cc7de662a0a1661bf978c7cf3315a18929da9"}
Feb 20 11:53:25.679686 master-0 kubenswrapper[7756]: I0220 11:53:25.679584 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4wkh4" podStartSLOduration=3.679519067 podStartE2EDuration="3.679519067s" podCreationTimestamp="2026-02-20 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:53:25.674836786 +0000 UTC m=+251.417084804" watchObservedRunningTime="2026-02-20 11:53:25.679519067 +0000 UTC m=+251.421767125"
Feb 20 11:53:25.879717 master-0 kubenswrapper[7756]: I0220 11:53:25.879665 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:25.879717 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:25.879717 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:25.879717 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:25.879996 master-0 kubenswrapper[7756]: I0220 11:53:25.879729 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:26.864292 master-0 kubenswrapper[7756]: I0220 11:53:26.864186 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 20 11:53:26.864969 master-0 kubenswrapper[7756]: E0220 11:53:26.864554 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5827049e-6178-46cf-83c5-cff55daac768" containerName="collect-profiles"
Feb 20 11:53:26.864969 master-0 kubenswrapper[7756]: I0220 11:53:26.864576 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5827049e-6178-46cf-83c5-cff55daac768" containerName="collect-profiles"
Feb 20 11:53:26.864969 master-0 kubenswrapper[7756]: I0220 11:53:26.864793 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5827049e-6178-46cf-83c5-cff55daac768" containerName="collect-profiles"
Feb 20 11:53:26.865408 master-0 kubenswrapper[7756]: I0220 11:53:26.865368 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:26.868520 master-0 kubenswrapper[7756]: I0220 11:53:26.868447 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 20 11:53:26.870763 master-0 kubenswrapper[7756]: I0220 11:53:26.870702 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-zgwqj"
Feb 20 11:53:26.875448 master-0 kubenswrapper[7756]: I0220 11:53:26.875352 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 20 11:53:26.883475 master-0 kubenswrapper[7756]: I0220 11:53:26.883420 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:26.883475 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:26.883475 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:26.883475 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:26.883700 master-0 kubenswrapper[7756]: I0220 11:53:26.883487 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:26.934929 master-0 kubenswrapper[7756]: I0220 11:53:26.934849 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-var-lock\") pod \"installer-3-master-0\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:26.934929 master-0 kubenswrapper[7756]: I0220 11:53:26.934897 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:26.934929 master-0 kubenswrapper[7756]: I0220 11:53:26.934917 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/148cc321-3a17-4852-a75a-e8ac95139eb8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:27.037121 master-0 kubenswrapper[7756]: I0220 11:53:27.037051 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-var-lock\") pod \"installer-3-master-0\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:27.037354 master-0 kubenswrapper[7756]: I0220 11:53:27.037181 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-var-lock\") pod \"installer-3-master-0\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:27.037354 master-0 kubenswrapper[7756]: I0220 11:53:27.037332 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:27.037441 master-0 kubenswrapper[7756]: I0220 11:53:27.037372 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:27.037441 master-0 kubenswrapper[7756]: I0220 11:53:27.037412 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/148cc321-3a17-4852-a75a-e8ac95139eb8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:27.058395 master-0 kubenswrapper[7756]: I0220 11:53:27.058334 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/148cc321-3a17-4852-a75a-e8ac95139eb8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:27.190081 master-0 kubenswrapper[7756]: I0220 11:53:27.189975 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:53:27.196514 master-0 kubenswrapper[7756]: I0220 11:53:27.196473 7756 patch_prober.go:28] interesting pod/machine-config-daemon-mpwks container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 11:53:27.196594 master-0 kubenswrapper[7756]: I0220 11:53:27.196522 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" podUID="37cb3bb1-f5ba-4b7b-9af9-55bf61906a51" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 11:53:27.676160 master-0 kubenswrapper[7756]: I0220 11:53:27.676090 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" event={"ID":"b9fe0660-fae4-4f97-8895-dbc4845cee40","Type":"ContainerStarted","Data":"fc3d6350c279feb69a62a6813cb6e2a0c4c8271f1cc243dece6150dc4f4f4af5"}
Feb 20 11:53:27.845968 master-0 kubenswrapper[7756]: I0220 11:53:27.845894 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 20 11:53:27.850729 master-0 kubenswrapper[7756]: W0220 11:53:27.850650 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod148cc321_3a17_4852_a75a_e8ac95139eb8.slice/crio-d637b82c89f26c321cedef58ed73b3beb4ce3dd682ac20250458654f4757c3e7 WatchSource:0}: Error finding container d637b82c89f26c321cedef58ed73b3beb4ce3dd682ac20250458654f4757c3e7: Status 404 returned error can't find the container with id d637b82c89f26c321cedef58ed73b3beb4ce3dd682ac20250458654f4757c3e7
Feb 20 11:53:27.880210 master-0 kubenswrapper[7756]: I0220 11:53:27.880161 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:27.880210 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:27.880210 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:27.880210 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:27.880858 master-0 kubenswrapper[7756]: I0220 11:53:27.880229 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:28.687477 master-0 kubenswrapper[7756]: I0220 11:53:28.687302 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" event={"ID":"b9fe0660-fae4-4f97-8895-dbc4845cee40","Type":"ContainerStarted","Data":"896384081a38765a9e936c0fcb6839d7eeed4f3f67378ee3efa24bf97d3050c1"}
Feb 20 11:53:28.690313 master-0 kubenswrapper[7756]: I0220 11:53:28.690259 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"148cc321-3a17-4852-a75a-e8ac95139eb8","Type":"ContainerStarted","Data":"02fab2fdba837309f8086a9fb1b2446510dff4e4ca65a786c3fc86a795a7af11"}
Feb 20 11:53:28.690443 master-0 kubenswrapper[7756]: I0220 11:53:28.690324 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"148cc321-3a17-4852-a75a-e8ac95139eb8","Type":"ContainerStarted","Data":"d637b82c89f26c321cedef58ed73b3beb4ce3dd682ac20250458654f4757c3e7"}
Feb 20 11:53:28.717729 master-0 kubenswrapper[7756]: I0220 11:53:28.717589 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" podStartSLOduration=2.824946579 podStartE2EDuration="4.717555822s" podCreationTimestamp="2026-02-20 11:53:24 +0000 UTC" firstStartedPulling="2026-02-20 11:53:25.484316445 +0000 UTC m=+251.226564453" lastFinishedPulling="2026-02-20 11:53:27.376925688 +0000 UTC m=+253.119173696" observedRunningTime="2026-02-20 11:53:28.711355299 +0000 UTC m=+254.453603377" watchObservedRunningTime="2026-02-20 11:53:28.717555822 +0000 UTC m=+254.459803930"
Feb 20 11:53:28.739601 master-0 kubenswrapper[7756]: I0220 11:53:28.739122 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=2.739096792 podStartE2EDuration="2.739096792s" podCreationTimestamp="2026-02-20 11:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:53:28.734121814 +0000 UTC m=+254.476369892" watchObservedRunningTime="2026-02-20 11:53:28.739096792 +0000 UTC m=+254.481344810"
Feb 20 11:53:28.879975 master-0 kubenswrapper[7756]: I0220 11:53:28.879866 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:28.879975 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:28.879975 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:28.879975 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:28.881442 master-0 kubenswrapper[7756]: I0220 11:53:28.879982 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20
11:53:29.877731 master-0 kubenswrapper[7756]: I0220 11:53:29.877647 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:53:29.880553 master-0 kubenswrapper[7756]: I0220 11:53:29.880461 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:29.880553 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:29.880553 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:29.880553 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:29.881384 master-0 kubenswrapper[7756]: I0220 11:53:29.880575 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:30.879519 master-0 kubenswrapper[7756]: I0220 11:53:30.879463 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:30.879519 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:30.879519 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:30.879519 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:30.879864 master-0 kubenswrapper[7756]: I0220 11:53:30.879553 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:31.029508 
master-0 kubenswrapper[7756]: I0220 11:53:31.029439 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-8d7nc"] Feb 20 11:53:31.030715 master-0 kubenswrapper[7756]: I0220 11:53:31.030688 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.031964 master-0 kubenswrapper[7756]: W0220 11:53:31.031932 7756 reflector.go:561] object-"openshift-monitoring"/"node-exporter-tls": failed to list *v1.Secret: secrets "node-exporter-tls" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'master-0' and this object Feb 20 11:53:31.032029 master-0 kubenswrapper[7756]: E0220 11:53:31.031977 7756 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"node-exporter-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-exporter-tls\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 20 11:53:31.032263 master-0 kubenswrapper[7756]: W0220 11:53:31.032221 7756 reflector.go:561] object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config": failed to list *v1.Secret: secrets "node-exporter-kube-rbac-proxy-config" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'master-0' and this object Feb 20 11:53:31.032313 master-0 kubenswrapper[7756]: E0220 11:53:31.032287 7756 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-exporter-kube-rbac-proxy-config\" is forbidden: User 
\"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 20 11:53:31.037406 master-0 kubenswrapper[7756]: W0220 11:53:31.037360 7756 reflector.go:561] object-"openshift-monitoring"/"node-exporter-dockercfg-zxcjx": failed to list *v1.Secret: secrets "node-exporter-dockercfg-zxcjx" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'master-0' and this object Feb 20 11:53:31.037498 master-0 kubenswrapper[7756]: E0220 11:53:31.037424 7756 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-zxcjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-exporter-dockercfg-zxcjx\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 20 11:53:31.056453 master-0 kubenswrapper[7756]: I0220 11:53:31.056389 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"] Feb 20 11:53:31.058024 master-0 kubenswrapper[7756]: I0220 11:53:31.057992 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 11:53:31.060437 master-0 kubenswrapper[7756]: I0220 11:53:31.060392 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-l7xzb" Feb 20 11:53:31.061004 master-0 kubenswrapper[7756]: I0220 11:53:31.060970 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 20 11:53:31.061068 master-0 kubenswrapper[7756]: I0220 11:53:31.061012 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 20 11:53:31.075159 master-0 kubenswrapper[7756]: I0220 11:53:31.075103 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"] Feb 20 11:53:31.077103 master-0 kubenswrapper[7756]: I0220 11:53:31.077067 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.078343 master-0 kubenswrapper[7756]: I0220 11:53:31.078300 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"] Feb 20 11:53:31.079671 master-0 kubenswrapper[7756]: I0220 11:53:31.079643 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-46trq" Feb 20 11:53:31.080027 master-0 kubenswrapper[7756]: I0220 11:53:31.080006 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 20 11:53:31.080165 master-0 kubenswrapper[7756]: I0220 11:53:31.080146 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 20 11:53:31.080357 master-0 kubenswrapper[7756]: I0220 11:53:31.080334 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 20 11:53:31.095605 master-0 kubenswrapper[7756]: I0220 11:53:31.095558 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"] Feb 20 11:53:31.110286 master-0 kubenswrapper[7756]: I0220 11:53:31.110239 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-tls\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.110425 master-0 kubenswrapper[7756]: I0220 11:53:31.110333 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-sys\") pod 
\"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.110425 master-0 kubenswrapper[7756]: I0220 11:53:31.110378 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.110425 master-0 kubenswrapper[7756]: I0220 11:53:31.110415 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-wtmp\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.110543 master-0 kubenswrapper[7756]: I0220 11:53:31.110445 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-root\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.110543 master-0 kubenswrapper[7756]: I0220 11:53:31.110492 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-textfile\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.110543 master-0 kubenswrapper[7756]: I0220 11:53:31.110517 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-metrics-client-ca\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.110724 master-0 kubenswrapper[7756]: I0220 11:53:31.110684 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dx69\" (UniqueName: \"kubernetes.io/projected/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-kube-api-access-2dx69\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.212616 master-0 kubenswrapper[7756]: I0220 11:53:31.212563 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.212830 master-0 kubenswrapper[7756]: I0220 11:53:31.212627 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dx69\" (UniqueName: \"kubernetes.io/projected/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-kube-api-access-2dx69\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.212830 master-0 kubenswrapper[7756]: I0220 11:53:31.212778 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-tls\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.212979 master-0 kubenswrapper[7756]: I0220 
11:53:31.212939 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89ed6373-78f8-4d77-82b2-1ab055b5b862-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 11:53:31.213034 master-0 kubenswrapper[7756]: I0220 11:53:31.212998 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.213077 master-0 kubenswrapper[7756]: I0220 11:53:31.213042 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-sys\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.213192 master-0 kubenswrapper[7756]: I0220 11:53:31.213163 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.213239 master-0 kubenswrapper[7756]: I0220 11:53:31.213200 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-sys\") pod \"node-exporter-8d7nc\" (UID: 
\"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.213299 master-0 kubenswrapper[7756]: I0220 11:53:31.213251 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-wtmp\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.213382 master-0 kubenswrapper[7756]: I0220 11:53:31.213355 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-root\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.213443 master-0 kubenswrapper[7756]: I0220 11:53:31.213415 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 11:53:31.213553 master-0 kubenswrapper[7756]: I0220 11:53:31.213505 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-root\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.213603 master-0 kubenswrapper[7756]: I0220 11:53:31.213521 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.213676 master-0 kubenswrapper[7756]: I0220 11:53:31.213637 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64ql\" (UniqueName: \"kubernetes.io/projected/89ed6373-78f8-4d77-82b2-1ab055b5b862-kube-api-access-f64ql\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 11:53:31.213722 master-0 kubenswrapper[7756]: I0220 11:53:31.213696 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-wtmp\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.213802 master-0 kubenswrapper[7756]: I0220 11:53:31.213771 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-textfile\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.213856 master-0 kubenswrapper[7756]: I0220 11:53:31.213826 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 
11:53:31.213967 master-0 kubenswrapper[7756]: I0220 11:53:31.213939 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-metrics-client-ca\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.214018 master-0 kubenswrapper[7756]: I0220 11:53:31.213979 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 11:53:31.214078 master-0 kubenswrapper[7756]: I0220 11:53:31.214053 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/042d8457-04dc-4171-8b0f-f9e3de695c46-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.214120 master-0 kubenswrapper[7756]: I0220 11:53:31.214097 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpz9d\" (UniqueName: \"kubernetes.io/projected/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-api-access-hpz9d\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.214173 master-0 kubenswrapper[7756]: I0220 11:53:31.214150 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-textfile\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.215055 master-0 kubenswrapper[7756]: I0220 11:53:31.215010 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-metrics-client-ca\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.227193 master-0 kubenswrapper[7756]: I0220 11:53:31.227154 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dx69\" (UniqueName: \"kubernetes.io/projected/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-kube-api-access-2dx69\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 11:53:31.314997 master-0 kubenswrapper[7756]: I0220 11:53:31.314919 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64ql\" (UniqueName: \"kubernetes.io/projected/89ed6373-78f8-4d77-82b2-1ab055b5b862-kube-api-access-f64ql\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 11:53:31.314997 master-0 kubenswrapper[7756]: I0220 11:53:31.314982 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.315282 
master-0 kubenswrapper[7756]: I0220 11:53:31.315014 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.315282 master-0 kubenswrapper[7756]: I0220 11:53:31.315264 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 11:53:31.315393 master-0 kubenswrapper[7756]: I0220 11:53:31.315323 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/042d8457-04dc-4171-8b0f-f9e3de695c46-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.315393 master-0 kubenswrapper[7756]: I0220 11:53:31.315361 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpz9d\" (UniqueName: \"kubernetes.io/projected/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-api-access-hpz9d\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 11:53:31.315482 master-0 kubenswrapper[7756]: I0220 11:53:31.315410 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 11:53:31.315556 master-0 kubenswrapper[7756]: I0220 11:53:31.315499 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89ed6373-78f8-4d77-82b2-1ab055b5b862-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"
Feb 20 11:53:31.315610 master-0 kubenswrapper[7756]: I0220 11:53:31.315565 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 11:53:31.315942 master-0 kubenswrapper[7756]: I0220 11:53:31.315891 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/042d8457-04dc-4171-8b0f-f9e3de695c46-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 11:53:31.316002 master-0 kubenswrapper[7756]: I0220 11:53:31.315930 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"
Feb 20 11:53:31.316852 master-0 kubenswrapper[7756]: I0220 11:53:31.316796 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89ed6373-78f8-4d77-82b2-1ab055b5b862-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"
Feb 20 11:53:31.316996 master-0 kubenswrapper[7756]: I0220 11:53:31.316946 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 11:53:31.317463 master-0 kubenswrapper[7756]: I0220 11:53:31.317393 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 11:53:31.320152 master-0 kubenswrapper[7756]: I0220 11:53:31.320112 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 11:53:31.320334 master-0 kubenswrapper[7756]: I0220 11:53:31.320302 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"
Feb 20 11:53:31.320389 master-0 kubenswrapper[7756]: I0220 11:53:31.320330 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 11:53:31.320466 master-0 kubenswrapper[7756]: I0220 11:53:31.320429 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"
Feb 20 11:53:31.336353 master-0 kubenswrapper[7756]: I0220 11:53:31.336267 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpz9d\" (UniqueName: \"kubernetes.io/projected/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-api-access-hpz9d\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 11:53:31.339663 master-0 kubenswrapper[7756]: I0220 11:53:31.339613 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f64ql\" (UniqueName: \"kubernetes.io/projected/89ed6373-78f8-4d77-82b2-1ab055b5b862-kube-api-access-f64ql\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"
Feb 20 11:53:31.450604 master-0 kubenswrapper[7756]: I0220 11:53:31.450476 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"
Feb 20 11:53:31.473837 master-0 kubenswrapper[7756]: I0220 11:53:31.473792 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 11:53:31.880135 master-0 kubenswrapper[7756]: I0220 11:53:31.880049 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:31.880135 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:31.880135 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:31.880135 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:31.880135 master-0 kubenswrapper[7756]: I0220 11:53:31.880132 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:31.956855 master-0 kubenswrapper[7756]: I0220 11:53:31.956772 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 20 11:53:31.969161 master-0 kubenswrapper[7756]: I0220 11:53:31.968656 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc"
Feb 20 11:53:31.975267 master-0 kubenswrapper[7756]: I0220 11:53:31.975145 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"]
Feb 20 11:53:31.976068 master-0 kubenswrapper[7756]: W0220 11:53:31.976026 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89ed6373_78f8_4d77_82b2_1ab055b5b862.slice/crio-3372bbf7f4c306095391a5b4c0a6615ca5aaf373fb3cc461d59deb2a7e8dca2b WatchSource:0}: Error finding container 3372bbf7f4c306095391a5b4c0a6615ca5aaf373fb3cc461d59deb2a7e8dca2b: Status 404 returned error can't find the container with id 3372bbf7f4c306095391a5b4c0a6615ca5aaf373fb3cc461d59deb2a7e8dca2b
Feb 20 11:53:31.986794 master-0 kubenswrapper[7756]: I0220 11:53:31.986755 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 20 11:53:31.991608 master-0 kubenswrapper[7756]: I0220 11:53:31.991549 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"]
Feb 20 11:53:31.999117 master-0 kubenswrapper[7756]: I0220 11:53:31.999063 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-tls\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc"
Feb 20 11:53:32.001282 master-0 kubenswrapper[7756]: W0220 11:53:32.001224 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod042d8457_04dc_4171_8b0f_f9e3de695c46.slice/crio-fb0b310e4353078b29e20eeb338d9c0abab57242511e7e72ade79783d9a85447 WatchSource:0}: Error finding container fb0b310e4353078b29e20eeb338d9c0abab57242511e7e72ade79783d9a85447: Status 404 returned error can't find the container with id fb0b310e4353078b29e20eeb338d9c0abab57242511e7e72ade79783d9a85447
Feb 20 11:53:32.115081 master-0 kubenswrapper[7756]: I0220 11:53:32.114984 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-zxcjx"
Feb 20 11:53:32.267876 master-0 kubenswrapper[7756]: I0220 11:53:32.267835 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-8d7nc"
Feb 20 11:53:32.723287 master-0 kubenswrapper[7756]: I0220 11:53:32.723210 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8d7nc" event={"ID":"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2","Type":"ContainerStarted","Data":"e9b84bcaac977feb96e17841e41bc90c7743f1709d32f1ab4ffe9c651b7c5436"}
Feb 20 11:53:32.725431 master-0 kubenswrapper[7756]: I0220 11:53:32.725365 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" event={"ID":"042d8457-04dc-4171-8b0f-f9e3de695c46","Type":"ContainerStarted","Data":"fb0b310e4353078b29e20eeb338d9c0abab57242511e7e72ade79783d9a85447"}
Feb 20 11:53:32.727998 master-0 kubenswrapper[7756]: I0220 11:53:32.727955 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" event={"ID":"89ed6373-78f8-4d77-82b2-1ab055b5b862","Type":"ContainerStarted","Data":"5b4d262fc03a3478edf40c9723e0c0e5c0f740fce56e0634a32a48e69fb176f5"}
Feb 20 11:53:32.727998 master-0 kubenswrapper[7756]: I0220 11:53:32.727994 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" event={"ID":"89ed6373-78f8-4d77-82b2-1ab055b5b862","Type":"ContainerStarted","Data":"07bd704a7af2ce153f2d07b9566ceac27133b337ca81d9073a4288938e0c1f09"}
Feb 20 11:53:32.728114 master-0 kubenswrapper[7756]: I0220 11:53:32.728010 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" event={"ID":"89ed6373-78f8-4d77-82b2-1ab055b5b862","Type":"ContainerStarted","Data":"3372bbf7f4c306095391a5b4c0a6615ca5aaf373fb3cc461d59deb2a7e8dca2b"}
Feb 20 11:53:32.880470 master-0 kubenswrapper[7756]: I0220 11:53:32.880292 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:32.880470 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:32.880470 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:32.880470 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:32.880987 master-0 kubenswrapper[7756]: I0220 11:53:32.880453 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:33.880563 master-0 kubenswrapper[7756]: I0220 11:53:33.879769 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:33.880563 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:33.880563 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:33.880563 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:33.880563 master-0 kubenswrapper[7756]: I0220 11:53:33.879841 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:34.745077 master-0 kubenswrapper[7756]: I0220 11:53:34.744380 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" event={"ID":"042d8457-04dc-4171-8b0f-f9e3de695c46","Type":"ContainerStarted","Data":"1a5164777b36dced448816d4475ee4227f65d348d287d5b0f0e395ba93c1025c"}
Feb 20 11:53:34.745077 master-0 kubenswrapper[7756]: I0220 11:53:34.744559 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" event={"ID":"042d8457-04dc-4171-8b0f-f9e3de695c46","Type":"ContainerStarted","Data":"f6fe05e6164b8633fc54b22e624cd58b59996d57432899789ffb2f7cd4a3f6e8"}
Feb 20 11:53:34.745077 master-0 kubenswrapper[7756]: I0220 11:53:34.744580 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" event={"ID":"042d8457-04dc-4171-8b0f-f9e3de695c46","Type":"ContainerStarted","Data":"71bfbd48de034282754b9d81e8ed2970b5cff95b919b9d76b37fe3cf0e0590a4"}
Feb 20 11:53:34.748436 master-0 kubenswrapper[7756]: I0220 11:53:34.748393 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" event={"ID":"89ed6373-78f8-4d77-82b2-1ab055b5b862","Type":"ContainerStarted","Data":"8305dbad96ecf9cd12df2076f3c1a56da3927f1c781aa5ae7f0f5ba06b70ab5f"}
Feb 20 11:53:34.750350 master-0 kubenswrapper[7756]: I0220 11:53:34.750256 7756 generic.go:334] "Generic (PLEG): container finished" podID="62ba4bae-a5e1-4c4d-b544-25d0e59eeac2" containerID="34f6ced44b08101957e2d08e45cc1aa5835ffa33e5d435353a4994d649f8ae48" exitCode=0
Feb 20 11:53:34.750350 master-0 kubenswrapper[7756]: I0220 11:53:34.750321 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8d7nc" event={"ID":"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2","Type":"ContainerDied","Data":"34f6ced44b08101957e2d08e45cc1aa5835ffa33e5d435353a4994d649f8ae48"}
Feb 20 11:53:34.775063 master-0 kubenswrapper[7756]: I0220 11:53:34.774961 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" podStartSLOduration=1.749625778 podStartE2EDuration="3.774939009s" podCreationTimestamp="2026-02-20 11:53:31 +0000 UTC" firstStartedPulling="2026-02-20 11:53:32.004817544 +0000 UTC m=+257.747065562" lastFinishedPulling="2026-02-20 11:53:34.030130785 +0000 UTC m=+259.772378793" observedRunningTime="2026-02-20 11:53:34.772040278 +0000 UTC m=+260.514288326" watchObservedRunningTime="2026-02-20 11:53:34.774939009 +0000 UTC m=+260.517187057"
Feb 20 11:53:34.798191 master-0 kubenswrapper[7756]: I0220 11:53:34.798048 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" podStartSLOduration=2.050553937 podStartE2EDuration="3.798027083s" podCreationTimestamp="2026-02-20 11:53:31 +0000 UTC" firstStartedPulling="2026-02-20 11:53:32.284038038 +0000 UTC m=+258.026286036" lastFinishedPulling="2026-02-20 11:53:34.031511164 +0000 UTC m=+259.773759182" observedRunningTime="2026-02-20 11:53:34.796630514 +0000 UTC m=+260.538878542" watchObservedRunningTime="2026-02-20 11:53:34.798027083 +0000 UTC m=+260.540275131"
Feb 20 11:53:34.880925 master-0 kubenswrapper[7756]: I0220 11:53:34.880805 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:34.880925 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:34.880925 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:34.880925 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:34.882074 master-0 kubenswrapper[7756]: I0220 11:53:34.881024 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:35.767832 master-0 kubenswrapper[7756]: I0220 11:53:35.767578 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8d7nc" event={"ID":"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2","Type":"ContainerStarted","Data":"209217d2f01cffd5019382d1deefe5486c6baf113746f0bf32e653815dc1fd6f"}
Feb 20 11:53:35.768173 master-0 kubenswrapper[7756]: I0220 11:53:35.768143 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8d7nc" event={"ID":"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2","Type":"ContainerStarted","Data":"ded97a764fa02517754d9bc7891827bc812e4153679216bca6c40c8898ed6fd4"}
Feb 20 11:53:35.810248 master-0 kubenswrapper[7756]: I0220 11:53:35.810096 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-8d7nc" podStartSLOduration=4.058821166 podStartE2EDuration="5.810062807s" podCreationTimestamp="2026-02-20 11:53:30 +0000 UTC" firstStartedPulling="2026-02-20 11:53:32.293650376 +0000 UTC m=+258.035898384" lastFinishedPulling="2026-02-20 11:53:34.044891997 +0000 UTC m=+259.787140025" observedRunningTime="2026-02-20 11:53:35.801554329 +0000 UTC m=+261.543802377" watchObservedRunningTime="2026-02-20 11:53:35.810062807 +0000 UTC m=+261.552310845"
Feb 20 11:53:35.880880 master-0 kubenswrapper[7756]: I0220 11:53:35.880758 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:35.880880 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:35.880880 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:35.880880 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:35.881914 master-0 kubenswrapper[7756]: I0220 11:53:35.880877 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:36.474957 master-0 kubenswrapper[7756]: I0220 11:53:36.474877 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"]
Feb 20 11:53:36.476003 master-0 kubenswrapper[7756]: I0220 11:53:36.475934 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.478766 master-0 kubenswrapper[7756]: I0220 11:53:36.478692 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-fgsdc"
Feb 20 11:53:36.479367 master-0 kubenswrapper[7756]: I0220 11:53:36.479313 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 20 11:53:36.479572 master-0 kubenswrapper[7756]: I0220 11:53:36.479317 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-cqc0j177hn3k9"
Feb 20 11:53:36.479702 master-0 kubenswrapper[7756]: I0220 11:53:36.479589 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 20 11:53:36.481930 master-0 kubenswrapper[7756]: I0220 11:53:36.481858 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 20 11:53:36.483829 master-0 kubenswrapper[7756]: I0220 11:53:36.483785 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 20 11:53:36.505517 master-0 kubenswrapper[7756]: I0220 11:53:36.505456 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"]
Feb 20 11:53:36.610334 master-0 kubenswrapper[7756]: I0220 11:53:36.610270 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.610334 master-0 kubenswrapper[7756]: I0220 11:53:36.610333 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzqt\" (UniqueName: \"kubernetes.io/projected/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-kube-api-access-kfzqt\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.610593 master-0 kubenswrapper[7756]: I0220 11:53:36.610371 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.610593 master-0 kubenswrapper[7756]: I0220 11:53:36.610555 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-audit-log\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.610706 master-0 kubenswrapper[7756]: I0220 11:53:36.610670 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.610805 master-0 kubenswrapper[7756]: I0220 11:53:36.610775 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.610890 master-0 kubenswrapper[7756]: I0220 11:53:36.610862 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.712971 master-0 kubenswrapper[7756]: I0220 11:53:36.712881 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.713200 master-0 kubenswrapper[7756]: I0220 11:53:36.713171 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.713268 master-0 kubenswrapper[7756]: I0220 11:53:36.713244 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzqt\" (UniqueName: \"kubernetes.io/projected/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-kube-api-access-kfzqt\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.713575 master-0 kubenswrapper[7756]: I0220 11:53:36.713484 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.713755 master-0 kubenswrapper[7756]: I0220 11:53:36.713694 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-audit-log\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.713912 master-0 kubenswrapper[7756]: I0220 11:53:36.713856 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.714030 master-0 kubenswrapper[7756]: I0220 11:53:36.713990 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.714568 master-0 kubenswrapper[7756]: I0220 11:53:36.714485 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-audit-log\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.715039 master-0 kubenswrapper[7756]: I0220 11:53:36.714971 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.716487 master-0 kubenswrapper[7756]: I0220 11:53:36.716387 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.718813 master-0 kubenswrapper[7756]: I0220 11:53:36.718734 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.719324 master-0 kubenswrapper[7756]: I0220 11:53:36.719271 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.719798 master-0 kubenswrapper[7756]: I0220 11:53:36.719743 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.741339 master-0 kubenswrapper[7756]: I0220 11:53:36.741220 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzqt\" (UniqueName: \"kubernetes.io/projected/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-kube-api-access-kfzqt\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.845083 master-0 kubenswrapper[7756]: I0220 11:53:36.844998 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:53:36.880149 master-0 kubenswrapper[7756]: I0220 11:53:36.880073 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:36.880149 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:36.880149 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:36.880149 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:36.880471 master-0 kubenswrapper[7756]: I0220 11:53:36.880167 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:37.393653 master-0 kubenswrapper[7756]: I0220 11:53:37.393580 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"]
Feb 20 11:53:37.399967 master-0 kubenswrapper[7756]: W0220 11:53:37.399738 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6717f0b4_c2f6_4ed5_94fb_778e5c7c983c.slice/crio-166c259337ecc4f073ab3f6650460578e7d7cb947fe167df547591f1f002809b WatchSource:0}: Error finding container 166c259337ecc4f073ab3f6650460578e7d7cb947fe167df547591f1f002809b: Status 404 returned error can't find the container with id 166c259337ecc4f073ab3f6650460578e7d7cb947fe167df547591f1f002809b
Feb 20 11:53:37.784164 master-0 kubenswrapper[7756]: I0220 11:53:37.784057 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" event={"ID":"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c","Type":"ContainerStarted","Data":"166c259337ecc4f073ab3f6650460578e7d7cb947fe167df547591f1f002809b"}
Feb 20 11:53:37.880880 master-0 kubenswrapper[7756]: I0220 11:53:37.880783 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:37.880880 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:37.880880 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:37.880880 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:37.881290 master-0 kubenswrapper[7756]: I0220 11:53:37.880882 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:38.879614 master-0 kubenswrapper[7756]: I0220 11:53:38.879498 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:38.879614 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:38.879614 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:38.879614 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:38.881074 master-0 kubenswrapper[7756]: I0220 11:53:38.879646 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:39.803197 master-0 kubenswrapper[7756]: I0220 11:53:39.803113 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" event={"ID":"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c","Type":"ContainerStarted","Data":"4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a"}
Feb 20 11:53:39.833053 master-0 kubenswrapper[7756]: I0220 11:53:39.832945 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" podStartSLOduration=2.266153836 podStartE2EDuration="3.832923985s" podCreationTimestamp="2026-02-20 11:53:36 +0000 UTC" firstStartedPulling="2026-02-20 11:53:37.404096194 +0000 UTC m=+263.146344242" lastFinishedPulling="2026-02-20 11:53:38.970866383 +0000 UTC m=+264.713114391" observedRunningTime="2026-02-20 11:53:39.829491199 +0000 UTC m=+265.571739237" watchObservedRunningTime="2026-02-20 11:53:39.832923985 +0000 UTC m=+265.575172033"
Feb 20 11:53:39.881011 master-0 kubenswrapper[7756]: I0220 11:53:39.880955 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:39.881011 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:39.881011 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:39.881011 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:39.882020 master-0 kubenswrapper[7756]: I0220 11:53:39.881659 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:40.880122 master-0 kubenswrapper[7756]: I0220 11:53:40.880030 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:40.880122 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:40.880122 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:40.880122 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:40.880122 master-0 kubenswrapper[7756]: I0220 11:53:40.880119 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:41.880698 master-0 kubenswrapper[7756]: I0220 11:53:41.880609 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:41.880698 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:41.880698 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:41.880698 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:41.881655 master-0 kubenswrapper[7756]: I0220 11:53:41.880702 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:42.880369 master-0 kubenswrapper[7756]: I0220 11:53:42.880279 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:42.880369 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:42.880369 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:42.880369 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:42.881342 master-0 kubenswrapper[7756]: I0220 11:53:42.880388 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:53:43.880549 master-0 kubenswrapper[7756]: I0220 11:53:43.880422 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:53:43.880549 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:53:43.880549 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:53:43.880549 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:53:43.881506 master-0 kubenswrapper[7756]: I0220 
11:53:43.880560 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:44.880348 master-0 kubenswrapper[7756]: I0220 11:53:44.880298 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:44.880348 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:44.880348 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:44.880348 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:44.880863 master-0 kubenswrapper[7756]: I0220 11:53:44.880370 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:45.880301 master-0 kubenswrapper[7756]: I0220 11:53:45.880208 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:45.880301 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:45.880301 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:45.880301 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:45.880301 master-0 kubenswrapper[7756]: I0220 11:53:45.880298 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 20 11:53:46.879946 master-0 kubenswrapper[7756]: I0220 11:53:46.879856 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:46.879946 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:46.879946 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:46.879946 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:46.879946 master-0 kubenswrapper[7756]: I0220 11:53:46.879933 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:47.882925 master-0 kubenswrapper[7756]: I0220 11:53:47.882834 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:47.882925 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:47.882925 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:47.882925 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:47.883542 master-0 kubenswrapper[7756]: I0220 11:53:47.883107 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:48.879520 master-0 kubenswrapper[7756]: I0220 11:53:48.879407 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:48.879520 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:48.879520 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:48.879520 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:48.880096 master-0 kubenswrapper[7756]: I0220 11:53:48.879644 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:49.879770 master-0 kubenswrapper[7756]: I0220 11:53:49.879656 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:49.879770 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:49.879770 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:49.879770 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:49.881031 master-0 kubenswrapper[7756]: I0220 11:53:49.879762 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:50.880434 master-0 kubenswrapper[7756]: I0220 11:53:50.880332 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:50.880434 master-0 kubenswrapper[7756]: 
[-]has-synced failed: reason withheld Feb 20 11:53:50.880434 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:50.880434 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:50.881477 master-0 kubenswrapper[7756]: I0220 11:53:50.880445 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:51.880074 master-0 kubenswrapper[7756]: I0220 11:53:51.879986 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:51.880074 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:51.880074 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:51.880074 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:51.880402 master-0 kubenswrapper[7756]: I0220 11:53:51.880094 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:52.880235 master-0 kubenswrapper[7756]: I0220 11:53:52.880128 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:52.880235 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:52.880235 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:52.880235 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:52.880961 master-0 
kubenswrapper[7756]: I0220 11:53:52.880271 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:53.881156 master-0 kubenswrapper[7756]: I0220 11:53:53.881058 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:53.881156 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:53.881156 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:53.881156 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:53.882205 master-0 kubenswrapper[7756]: I0220 11:53:53.881160 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:54.882451 master-0 kubenswrapper[7756]: I0220 11:53:54.882339 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:54.882451 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:54.882451 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:54.882451 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:54.882451 master-0 kubenswrapper[7756]: I0220 11:53:54.882427 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:55.880934 master-0 kubenswrapper[7756]: I0220 11:53:55.880839 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:55.880934 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:55.880934 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:55.880934 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:55.881508 master-0 kubenswrapper[7756]: I0220 11:53:55.880947 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:56.845856 master-0 kubenswrapper[7756]: I0220 11:53:56.845718 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 11:53:56.847079 master-0 kubenswrapper[7756]: I0220 11:53:56.845947 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 11:53:56.881075 master-0 kubenswrapper[7756]: I0220 11:53:56.880991 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:56.881075 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:56.881075 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:56.881075 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:56.881680 master-0 
kubenswrapper[7756]: I0220 11:53:56.881093 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:57.196498 master-0 kubenswrapper[7756]: I0220 11:53:57.196197 7756 patch_prober.go:28] interesting pod/machine-config-daemon-mpwks container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 11:53:57.196498 master-0 kubenswrapper[7756]: I0220 11:53:57.196304 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" podUID="37cb3bb1-f5ba-4b7b-9af9-55bf61906a51" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 11:53:57.196498 master-0 kubenswrapper[7756]: I0220 11:53:57.196374 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 11:53:57.197254 master-0 kubenswrapper[7756]: I0220 11:53:57.197179 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd16bd752b73b8b49c9f915a16effe79766d7670ad8d9f340d00a15fdc577892"} pod="openshift-machine-config-operator/machine-config-daemon-mpwks" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 11:53:57.197417 master-0 kubenswrapper[7756]: I0220 11:53:57.197344 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" podUID="37cb3bb1-f5ba-4b7b-9af9-55bf61906a51" 
containerName="machine-config-daemon" containerID="cri-o://cd16bd752b73b8b49c9f915a16effe79766d7670ad8d9f340d00a15fdc577892" gracePeriod=600 Feb 20 11:53:57.880193 master-0 kubenswrapper[7756]: I0220 11:53:57.880118 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:57.880193 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:57.880193 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:57.880193 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:57.882169 master-0 kubenswrapper[7756]: I0220 11:53:57.880195 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:57.972240 master-0 kubenswrapper[7756]: I0220 11:53:57.972163 7756 generic.go:334] "Generic (PLEG): container finished" podID="37cb3bb1-f5ba-4b7b-9af9-55bf61906a51" containerID="cd16bd752b73b8b49c9f915a16effe79766d7670ad8d9f340d00a15fdc577892" exitCode=0 Feb 20 11:53:57.972240 master-0 kubenswrapper[7756]: I0220 11:53:57.972234 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" event={"ID":"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51","Type":"ContainerDied","Data":"cd16bd752b73b8b49c9f915a16effe79766d7670ad8d9f340d00a15fdc577892"} Feb 20 11:53:58.880790 master-0 kubenswrapper[7756]: I0220 11:53:58.880695 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:58.880790 
master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:58.880790 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:58.880790 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:58.881941 master-0 kubenswrapper[7756]: I0220 11:53:58.880803 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:58.986342 master-0 kubenswrapper[7756]: I0220 11:53:58.986268 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mpwks" event={"ID":"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51","Type":"ContainerStarted","Data":"940953f7e2f345818490c59d55a2ae3bb182fa33cff86435de9ff8ae78c9426c"} Feb 20 11:53:59.836341 master-0 kubenswrapper[7756]: I0220 11:53:59.836274 7756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 11:53:59.836771 master-0 kubenswrapper[7756]: I0220 11:53:59.836726 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler" containerID="cri-o://610ed904564d38a9663079b5791a3bed3f3fde288a983c4b6a5a9408be5ffc50" gracePeriod=30 Feb 20 11:53:59.836921 master-0 kubenswrapper[7756]: I0220 11:53:59.836807 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler-recovery-controller" containerID="cri-o://1d1e4f19b4b937664918df87724b0ce6399cbc186e4b82d3db56d2fb037a5e05" gracePeriod=30 Feb 20 11:53:59.837068 master-0 kubenswrapper[7756]: I0220 11:53:59.836811 7756 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler-cert-syncer" containerID="cri-o://4dfeade65eb878550b91a87841f50892c43c67d9c3d37a72dc5c09f4d1bfeb67" gracePeriod=30 Feb 20 11:53:59.838721 master-0 kubenswrapper[7756]: I0220 11:53:59.837923 7756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 11:53:59.838721 master-0 kubenswrapper[7756]: E0220 11:53:59.838343 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="wait-for-host-port" Feb 20 11:53:59.838721 master-0 kubenswrapper[7756]: I0220 11:53:59.838374 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="wait-for-host-port" Feb 20 11:53:59.838721 master-0 kubenswrapper[7756]: E0220 11:53:59.838410 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler-recovery-controller" Feb 20 11:53:59.838721 master-0 kubenswrapper[7756]: I0220 11:53:59.838429 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler-recovery-controller" Feb 20 11:53:59.838721 master-0 kubenswrapper[7756]: E0220 11:53:59.838460 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler" Feb 20 11:53:59.838721 master-0 kubenswrapper[7756]: I0220 11:53:59.838478 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler" Feb 20 11:53:59.838721 master-0 kubenswrapper[7756]: E0220 11:53:59.838563 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler-cert-syncer" Feb 20 
11:53:59.838721 master-0 kubenswrapper[7756]: I0220 11:53:59.838585 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler-cert-syncer" Feb 20 11:53:59.839157 master-0 kubenswrapper[7756]: I0220 11:53:59.838865 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler-cert-syncer" Feb 20 11:53:59.839157 master-0 kubenswrapper[7756]: I0220 11:53:59.838921 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler-recovery-controller" Feb 20 11:53:59.839157 master-0 kubenswrapper[7756]: I0220 11:53:59.838952 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="416b60c941b7224bbf94e8f78b59b910" containerName="kube-scheduler" Feb 20 11:53:59.880107 master-0 kubenswrapper[7756]: I0220 11:53:59.880029 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:53:59.880107 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:53:59.880107 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:53:59.880107 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:53:59.880330 master-0 kubenswrapper[7756]: I0220 11:53:59.880139 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:53:59.996593 master-0 kubenswrapper[7756]: I0220 11:53:59.996519 7756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_416b60c941b7224bbf94e8f78b59b910/kube-scheduler-cert-syncer/0.log" Feb 20 11:53:59.997603 master-0 kubenswrapper[7756]: I0220 11:53:59.997558 7756 generic.go:334] "Generic (PLEG): container finished" podID="416b60c941b7224bbf94e8f78b59b910" containerID="1d1e4f19b4b937664918df87724b0ce6399cbc186e4b82d3db56d2fb037a5e05" exitCode=0 Feb 20 11:53:59.997603 master-0 kubenswrapper[7756]: I0220 11:53:59.997598 7756 generic.go:334] "Generic (PLEG): container finished" podID="416b60c941b7224bbf94e8f78b59b910" containerID="4dfeade65eb878550b91a87841f50892c43c67d9c3d37a72dc5c09f4d1bfeb67" exitCode=2 Feb 20 11:53:59.997763 master-0 kubenswrapper[7756]: I0220 11:53:59.997612 7756 generic.go:334] "Generic (PLEG): container finished" podID="416b60c941b7224bbf94e8f78b59b910" containerID="610ed904564d38a9663079b5791a3bed3f3fde288a983c4b6a5a9408be5ffc50" exitCode=0 Feb 20 11:53:59.997835 master-0 kubenswrapper[7756]: I0220 11:53:59.997741 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbff31dd4ba8a02321905b3bade3855c36331a3c01be642ab84f9369eaefe349" Feb 20 11:54:00.013369 master-0 kubenswrapper[7756]: I0220 11:54:00.013327 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_416b60c941b7224bbf94e8f78b59b910/kube-scheduler-cert-syncer/0.log" Feb 20 11:54:00.014336 master-0 kubenswrapper[7756]: I0220 11:54:00.014310 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:54:00.017799 master-0 kubenswrapper[7756]: I0220 11:54:00.017761 7756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="416b60c941b7224bbf94e8f78b59b910" podUID="aa2f6c0cf73fadd0d96a26150bb4dbb3" Feb 20 11:54:00.022541 master-0 kubenswrapper[7756]: I0220 11:54:00.022478 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:54:00.022666 master-0 kubenswrapper[7756]: I0220 11:54:00.022633 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 11:54:00.124998 master-0 kubenswrapper[7756]: I0220 11:54:00.124725 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"416b60c941b7224bbf94e8f78b59b910\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " Feb 20 11:54:00.124998 master-0 kubenswrapper[7756]: I0220 11:54:00.124919 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "416b60c941b7224bbf94e8f78b59b910" (UID: "416b60c941b7224bbf94e8f78b59b910"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:54:00.125366 master-0 kubenswrapper[7756]: I0220 11:54:00.125077 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"416b60c941b7224bbf94e8f78b59b910\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") "
Feb 20 11:54:00.125366 master-0 kubenswrapper[7756]: I0220 11:54:00.125194 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "416b60c941b7224bbf94e8f78b59b910" (UID: "416b60c941b7224bbf94e8f78b59b910"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:54:00.125980 master-0 kubenswrapper[7756]: I0220 11:54:00.125910 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:54:00.126104 master-0 kubenswrapper[7756]: I0220 11:54:00.126080 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:54:00.126397 master-0 kubenswrapper[7756]: I0220 11:54:00.126340 7756 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:54:00.126397 master-0 kubenswrapper[7756]: I0220 11:54:00.126391 7756 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:54:00.126648 master-0 kubenswrapper[7756]: I0220 11:54:00.126436 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:54:00.127014 master-0 kubenswrapper[7756]: I0220 11:54:00.126948 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:54:00.589919 master-0 kubenswrapper[7756]: I0220 11:54:00.589840 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416b60c941b7224bbf94e8f78b59b910" path="/var/lib/kubelet/pods/416b60c941b7224bbf94e8f78b59b910/volumes"
Feb 20 11:54:00.880028 master-0 kubenswrapper[7756]: I0220 11:54:00.879904 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:00.880028 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:00.880028 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:00.880028 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:00.880028 master-0 kubenswrapper[7756]: I0220 11:54:00.879984 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:01.004513 master-0 kubenswrapper[7756]: I0220 11:54:01.004455 7756 generic.go:334] "Generic (PLEG): container finished" podID="148cc321-3a17-4852-a75a-e8ac95139eb8" containerID="02fab2fdba837309f8086a9fb1b2446510dff4e4ca65a786c3fc86a795a7af11" exitCode=0
Feb 20 11:54:01.004964 master-0 kubenswrapper[7756]: I0220 11:54:01.004539 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"148cc321-3a17-4852-a75a-e8ac95139eb8","Type":"ContainerDied","Data":"02fab2fdba837309f8086a9fb1b2446510dff4e4ca65a786c3fc86a795a7af11"}
Feb 20 11:54:01.004964 master-0 kubenswrapper[7756]: I0220 11:54:01.004618 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:54:01.061608 master-0 kubenswrapper[7756]: I0220 11:54:01.061547 7756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="416b60c941b7224bbf94e8f78b59b910" podUID="aa2f6c0cf73fadd0d96a26150bb4dbb3"
Feb 20 11:54:01.880667 master-0 kubenswrapper[7756]: I0220 11:54:01.880564 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:01.880667 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:01.880667 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:01.880667 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:01.881090 master-0 kubenswrapper[7756]: I0220 11:54:01.880669 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:02.331839 master-0 kubenswrapper[7756]: I0220 11:54:02.331490 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:54:02.482867 master-0 kubenswrapper[7756]: I0220 11:54:02.482771 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-var-lock\") pod \"148cc321-3a17-4852-a75a-e8ac95139eb8\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") "
Feb 20 11:54:02.483139 master-0 kubenswrapper[7756]: I0220 11:54:02.482961 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-kubelet-dir\") pod \"148cc321-3a17-4852-a75a-e8ac95139eb8\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") "
Feb 20 11:54:02.483139 master-0 kubenswrapper[7756]: I0220 11:54:02.483077 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/148cc321-3a17-4852-a75a-e8ac95139eb8-kube-api-access\") pod \"148cc321-3a17-4852-a75a-e8ac95139eb8\" (UID: \"148cc321-3a17-4852-a75a-e8ac95139eb8\") "
Feb 20 11:54:02.483139 master-0 kubenswrapper[7756]: I0220 11:54:02.483075 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "148cc321-3a17-4852-a75a-e8ac95139eb8" (UID: "148cc321-3a17-4852-a75a-e8ac95139eb8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:54:02.483484 master-0 kubenswrapper[7756]: I0220 11:54:02.483426 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:54:02.483949 master-0 kubenswrapper[7756]: I0220 11:54:02.483000 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-var-lock" (OuterVolumeSpecName: "var-lock") pod "148cc321-3a17-4852-a75a-e8ac95139eb8" (UID: "148cc321-3a17-4852-a75a-e8ac95139eb8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:54:02.487832 master-0 kubenswrapper[7756]: I0220 11:54:02.487767 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148cc321-3a17-4852-a75a-e8ac95139eb8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "148cc321-3a17-4852-a75a-e8ac95139eb8" (UID: "148cc321-3a17-4852-a75a-e8ac95139eb8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 11:54:02.585051 master-0 kubenswrapper[7756]: I0220 11:54:02.584874 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/148cc321-3a17-4852-a75a-e8ac95139eb8-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 11:54:02.585051 master-0 kubenswrapper[7756]: I0220 11:54:02.584925 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/148cc321-3a17-4852-a75a-e8ac95139eb8-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 11:54:02.880621 master-0 kubenswrapper[7756]: I0220 11:54:02.880446 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:02.880621 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:02.880621 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:02.880621 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:02.880621 master-0 kubenswrapper[7756]: I0220 11:54:02.880563 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:03.022482 master-0 kubenswrapper[7756]: I0220 11:54:03.022426 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"148cc321-3a17-4852-a75a-e8ac95139eb8","Type":"ContainerDied","Data":"d637b82c89f26c321cedef58ed73b3beb4ce3dd682ac20250458654f4757c3e7"}
Feb 20 11:54:03.022835 master-0 kubenswrapper[7756]: I0220 11:54:03.022808 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d637b82c89f26c321cedef58ed73b3beb4ce3dd682ac20250458654f4757c3e7"
Feb 20 11:54:03.022982 master-0 kubenswrapper[7756]: I0220 11:54:03.022508 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 11:54:03.880565 master-0 kubenswrapper[7756]: I0220 11:54:03.880459 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:03.880565 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:03.880565 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:03.880565 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:03.881891 master-0 kubenswrapper[7756]: I0220 11:54:03.880605 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:04.880643 master-0 kubenswrapper[7756]: I0220 11:54:04.880572 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:04.880643 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:04.880643 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:04.880643 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:04.881591 master-0 kubenswrapper[7756]: I0220 11:54:04.880862 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:05.879837 master-0 kubenswrapper[7756]: I0220 11:54:05.879746 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:05.879837 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:05.879837 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:05.879837 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:05.880271 master-0 kubenswrapper[7756]: I0220 11:54:05.879844 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:06.880240 master-0 kubenswrapper[7756]: I0220 11:54:06.880152 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:06.880240 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:06.880240 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:06.880240 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:06.881228 master-0 kubenswrapper[7756]: I0220 11:54:06.880253 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:07.880240 master-0 kubenswrapper[7756]: I0220 11:54:07.880170 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:07.880240 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:07.880240 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:07.880240 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:07.881205 master-0 kubenswrapper[7756]: I0220 11:54:07.880254 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:08.880575 master-0 kubenswrapper[7756]: I0220 11:54:08.880468 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:08.880575 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:08.880575 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:08.880575 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:08.881465 master-0 kubenswrapper[7756]: I0220 11:54:08.880583 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:09.880185 master-0 kubenswrapper[7756]: I0220 11:54:09.880089 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:09.880185 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:09.880185 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:09.880185 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:09.881373 master-0 kubenswrapper[7756]: I0220 11:54:09.880199 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:10.880779 master-0 kubenswrapper[7756]: I0220 11:54:10.880574 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:10.880779 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:10.880779 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:10.880779 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:10.880779 master-0 kubenswrapper[7756]: I0220 11:54:10.880684 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:11.880575 master-0 kubenswrapper[7756]: I0220 11:54:11.880470 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:11.880575 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:11.880575 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:11.880575 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:11.881580 master-0 kubenswrapper[7756]: I0220 11:54:11.880596 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:12.880867 master-0 kubenswrapper[7756]: I0220 11:54:12.880744 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:12.880867 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:12.880867 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:12.880867 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:12.881993 master-0 kubenswrapper[7756]: I0220 11:54:12.880873 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:13.578497 master-0 kubenswrapper[7756]: I0220 11:54:13.578386 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:54:13.610153 master-0 kubenswrapper[7756]: I0220 11:54:13.610077 7756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="6cc875c2-7dec-4839-a128-430780fdb366"
Feb 20 11:54:13.610153 master-0 kubenswrapper[7756]: I0220 11:54:13.610131 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="6cc875c2-7dec-4839-a128-430780fdb366"
Feb 20 11:54:13.632787 master-0 kubenswrapper[7756]: I0220 11:54:13.632726 7756 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:54:13.634901 master-0 kubenswrapper[7756]: I0220 11:54:13.634816 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Feb 20 11:54:13.647350 master-0 kubenswrapper[7756]: I0220 11:54:13.647247 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Feb 20 11:54:13.652927 master-0 kubenswrapper[7756]: I0220 11:54:13.652876 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:54:13.659296 master-0 kubenswrapper[7756]: I0220 11:54:13.659218 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Feb 20 11:54:13.686362 master-0 kubenswrapper[7756]: W0220 11:54:13.686297 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa2f6c0cf73fadd0d96a26150bb4dbb3.slice/crio-c16bea21819b6e1d15437de870badf6de1fc66d12913185c9cecf80f61f24b54 WatchSource:0}: Error finding container c16bea21819b6e1d15437de870badf6de1fc66d12913185c9cecf80f61f24b54: Status 404 returned error can't find the container with id c16bea21819b6e1d15437de870badf6de1fc66d12913185c9cecf80f61f24b54
Feb 20 11:54:13.880567 master-0 kubenswrapper[7756]: I0220 11:54:13.880395 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:13.880567 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:13.880567 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:13.880567 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:13.880567 master-0 kubenswrapper[7756]: I0220 11:54:13.880492 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:14.115465 master-0 kubenswrapper[7756]: I0220 11:54:14.115358 7756 generic.go:334] "Generic (PLEG): container finished" podID="aa2f6c0cf73fadd0d96a26150bb4dbb3" containerID="970c2892630032916bff16279185e91dd2db588a1ad81c9a738b21187856ab20" exitCode=0
Feb 20 11:54:14.115465 master-0 kubenswrapper[7756]: I0220 11:54:14.115427 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerDied","Data":"970c2892630032916bff16279185e91dd2db588a1ad81c9a738b21187856ab20"}
Feb 20 11:54:14.115465 master-0 kubenswrapper[7756]: I0220 11:54:14.115470 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"c16bea21819b6e1d15437de870badf6de1fc66d12913185c9cecf80f61f24b54"}
Feb 20 11:54:14.779231 master-0 kubenswrapper[7756]: I0220 11:54:14.779141 7756 scope.go:117] "RemoveContainer" containerID="f1682d7b4b37ab8ab7b0e93abba0b5ee3a264e78978d6dc34d6d434f13d2a6ae"
Feb 20 11:54:14.804162 master-0 kubenswrapper[7756]: I0220 11:54:14.804104 7756 scope.go:117] "RemoveContainer" containerID="5ad7139b014a017e9214a9b49d5763ba0bf59d3613eecad560b203e714e96877"
Feb 20 11:54:14.879884 master-0 kubenswrapper[7756]: I0220 11:54:14.879822 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:14.879884 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:14.879884 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:14.879884 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:14.880093 master-0 kubenswrapper[7756]: I0220 11:54:14.879920 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:15.138877 master-0 kubenswrapper[7756]: I0220 11:54:15.138753 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"b3973bb4e0436fc81dccb8348c1f9f8491e95c0a5851afc33de82d620bb3b291"}
Feb 20 11:54:15.138877 master-0 kubenswrapper[7756]: I0220 11:54:15.138828 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"7e6c16941011718bcf6a9f94acdb17c25246b75a0407ed5d83ac4536ca1a0a88"}
Feb 20 11:54:15.880200 master-0 kubenswrapper[7756]: I0220 11:54:15.880093 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:15.880200 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:15.880200 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:15.880200 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:15.880200 master-0 kubenswrapper[7756]: I0220 11:54:15.880197 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:16.152853 master-0 kubenswrapper[7756]: I0220 11:54:16.152672 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"fa37ec307276e57d8dcd075e874fe4bda0e8faf9e0a2759374c512cf7a51b796"}
Feb 20 11:54:16.153630 master-0 kubenswrapper[7756]: I0220 11:54:16.153313 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 11:54:16.188572 master-0 kubenswrapper[7756]: I0220 11:54:16.185976 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.185949095 podStartE2EDuration="3.185949095s" podCreationTimestamp="2026-02-20 11:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:54:16.179915076 +0000 UTC m=+301.922163124" watchObservedRunningTime="2026-02-20 11:54:16.185949095 +0000 UTC m=+301.928197143"
Feb 20 11:54:16.856687 master-0 kubenswrapper[7756]: I0220 11:54:16.854691 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:54:16.863678 master-0 kubenswrapper[7756]: I0220 11:54:16.862784 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 11:54:16.879957 master-0 kubenswrapper[7756]: I0220 11:54:16.879897 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:16.879957 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:16.879957 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:16.879957 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:16.880293 master-0 kubenswrapper[7756]: I0220 11:54:16.879977 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:17.879695 master-0 kubenswrapper[7756]: I0220 11:54:17.879608 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:17.879695 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:17.879695 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:17.879695 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:17.880728 master-0 kubenswrapper[7756]: I0220 11:54:17.879714 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:18.880480 master-0 kubenswrapper[7756]: I0220 11:54:18.880401 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:18.880480 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:18.880480 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:18.880480 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:18.881518 master-0 kubenswrapper[7756]: I0220 11:54:18.880492 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:19.880838 master-0 kubenswrapper[7756]: I0220 11:54:19.880752 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:19.880838 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:19.880838 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:19.880838 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:19.880838 master-0 kubenswrapper[7756]: I0220 11:54:19.880860 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:20.880660 master-0 kubenswrapper[7756]: I0220 11:54:20.880583 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:20.880660 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:20.880660 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:20.880660 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:20.881812 master-0 kubenswrapper[7756]: I0220 11:54:20.880679 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:21.880226 master-0 kubenswrapper[7756]: I0220 11:54:21.880105 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:21.880226 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:21.880226 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:21.880226 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:21.880754 master-0 kubenswrapper[7756]: I0220 11:54:21.880232 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:22.880599 master-0 kubenswrapper[7756]: I0220 11:54:22.880433 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:22.880599 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:22.880599 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:22.880599 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:22.881502 master-0 kubenswrapper[7756]: I0220 11:54:22.880596 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:23.881185 master-0 kubenswrapper[7756]: I0220 11:54:23.881106 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:23.881185 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:23.881185 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:23.881185 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:23.882219 master-0 kubenswrapper[7756]: I0220 11:54:23.881194 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:24.880744 master-0 kubenswrapper[7756]: I0220 11:54:24.880669 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:24.880744 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:24.880744 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:24.880744 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:24.881056 master-0 kubenswrapper[7756]: I0220 11:54:24.880770 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:25.880222 master-0 kubenswrapper[7756]: I0220 11:54:25.880122 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:25.880222 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:25.880222 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:25.880222 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:25.881175 master-0 kubenswrapper[7756]: I0220 11:54:25.880236 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:26.880940 master-0 kubenswrapper[7756]: I0220 11:54:26.880848 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:26.880940 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:26.880940 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:26.880940 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:26.880940 master-0 kubenswrapper[7756]: I0220 11:54:26.880928 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:27.880229 master-0 kubenswrapper[7756]: I0220 11:54:27.880134 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:27.880229 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:27.880229 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:27.880229 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:27.880229 master-0 kubenswrapper[7756]: I0220 11:54:27.880223 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:28.881003 master-0 kubenswrapper[7756]: I0220 11:54:28.880915 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:28.881003 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:28.881003 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:28.881003 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:28.881003 master-0 kubenswrapper[7756]: I0220 11:54:28.881000 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:29.880007 master-0 kubenswrapper[7756]: I0220 11:54:29.879896 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:29.880007 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:29.880007 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:29.880007 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:29.880424 master-0 kubenswrapper[7756]: I0220 11:54:29.880010 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:54:30.880386 master-0 kubenswrapper[7756]: I0220 11:54:30.880294 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:30.880386 master-0 kubenswrapper[7756]:
[-]has-synced failed: reason withheld Feb 20 11:54:30.880386 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:30.880386 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:30.881395 master-0 kubenswrapper[7756]: I0220 11:54:30.880405 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:31.881225 master-0 kubenswrapper[7756]: I0220 11:54:31.881004 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:31.881225 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:31.881225 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:31.881225 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:31.882356 master-0 kubenswrapper[7756]: I0220 11:54:31.881248 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:32.880859 master-0 kubenswrapper[7756]: I0220 11:54:32.880743 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:32.880859 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:32.880859 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:32.880859 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:32.882065 master-0 
kubenswrapper[7756]: I0220 11:54:32.880931 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:33.880108 master-0 kubenswrapper[7756]: I0220 11:54:33.880015 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:33.880108 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:33.880108 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:33.880108 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:33.880643 master-0 kubenswrapper[7756]: I0220 11:54:33.880242 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:34.880806 master-0 kubenswrapper[7756]: I0220 11:54:34.880701 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:34.880806 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:34.880806 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:34.880806 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:34.881805 master-0 kubenswrapper[7756]: I0220 11:54:34.880807 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:35.880647 master-0 kubenswrapper[7756]: I0220 11:54:35.880523 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:35.880647 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:35.880647 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:35.880647 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:35.882005 master-0 kubenswrapper[7756]: I0220 11:54:35.881559 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:36.322672 master-0 kubenswrapper[7756]: I0220 11:54:36.322578 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/1.log" Feb 20 11:54:36.323999 master-0 kubenswrapper[7756]: I0220 11:54:36.323944 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/0.log" Feb 20 11:54:36.324120 master-0 kubenswrapper[7756]: I0220 11:54:36.324018 7756 generic.go:334] "Generic (PLEG): container finished" podID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" containerID="92faf490ce07d81111f5c9023da3d201553d13fd825a7918d8a229dadf38bba3" exitCode=1 Feb 20 11:54:36.324120 master-0 kubenswrapper[7756]: I0220 11:54:36.324062 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" 
event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerDied","Data":"92faf490ce07d81111f5c9023da3d201553d13fd825a7918d8a229dadf38bba3"} Feb 20 11:54:36.324120 master-0 kubenswrapper[7756]: I0220 11:54:36.324107 7756 scope.go:117] "RemoveContainer" containerID="3d07e9c592eed7a379f55e981ead57df10fdecdbcdadc7facb3720be20c537af" Feb 20 11:54:36.324979 master-0 kubenswrapper[7756]: I0220 11:54:36.324911 7756 scope.go:117] "RemoveContainer" containerID="92faf490ce07d81111f5c9023da3d201553d13fd825a7918d8a229dadf38bba3" Feb 20 11:54:36.325378 master-0 kubenswrapper[7756]: E0220 11:54:36.325315 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" Feb 20 11:54:36.880104 master-0 kubenswrapper[7756]: I0220 11:54:36.879998 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:36.880104 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:36.880104 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:36.880104 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:36.880555 master-0 kubenswrapper[7756]: I0220 11:54:36.880129 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:37.339257 master-0 kubenswrapper[7756]: I0220 11:54:37.339194 7756 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/1.log" Feb 20 11:54:37.880174 master-0 kubenswrapper[7756]: I0220 11:54:37.880101 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:37.880174 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:37.880174 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:37.880174 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:37.880615 master-0 kubenswrapper[7756]: I0220 11:54:37.880188 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:38.881051 master-0 kubenswrapper[7756]: I0220 11:54:38.880980 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:38.881051 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:38.881051 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:38.881051 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:38.882258 master-0 kubenswrapper[7756]: I0220 11:54:38.881067 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:39.880184 master-0 
kubenswrapper[7756]: I0220 11:54:39.880122 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:39.880184 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:39.880184 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:39.880184 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:39.880785 master-0 kubenswrapper[7756]: I0220 11:54:39.880740 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:40.880248 master-0 kubenswrapper[7756]: I0220 11:54:40.880138 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:40.880248 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:40.880248 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:40.880248 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:40.880248 master-0 kubenswrapper[7756]: I0220 11:54:40.880232 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:41.880554 master-0 kubenswrapper[7756]: I0220 11:54:41.880411 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:41.880554 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:41.880554 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:41.880554 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:41.880554 master-0 kubenswrapper[7756]: I0220 11:54:41.880574 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:42.879912 master-0 kubenswrapper[7756]: I0220 11:54:42.879805 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:42.879912 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:42.879912 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:42.879912 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:42.879912 master-0 kubenswrapper[7756]: I0220 11:54:42.879897 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:43.880012 master-0 kubenswrapper[7756]: I0220 11:54:43.879941 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:43.880012 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:43.880012 master-0 kubenswrapper[7756]: 
[+]process-running ok Feb 20 11:54:43.880012 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:43.880908 master-0 kubenswrapper[7756]: I0220 11:54:43.880022 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:44.880671 master-0 kubenswrapper[7756]: I0220 11:54:44.880588 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:44.880671 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:44.880671 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:44.880671 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:44.881364 master-0 kubenswrapper[7756]: I0220 11:54:44.880701 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:45.880664 master-0 kubenswrapper[7756]: I0220 11:54:45.880572 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:45.880664 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:45.880664 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:45.880664 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:45.881754 master-0 kubenswrapper[7756]: I0220 11:54:45.880680 7756 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:46.578721 master-0 kubenswrapper[7756]: I0220 11:54:46.578656 7756 scope.go:117] "RemoveContainer" containerID="92faf490ce07d81111f5c9023da3d201553d13fd825a7918d8a229dadf38bba3" Feb 20 11:54:46.879816 master-0 kubenswrapper[7756]: I0220 11:54:46.879495 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:46.879816 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:46.879816 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:46.879816 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:46.879816 master-0 kubenswrapper[7756]: I0220 11:54:46.879729 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:47.432571 master-0 kubenswrapper[7756]: I0220 11:54:47.432436 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/1.log" Feb 20 11:54:47.433361 master-0 kubenswrapper[7756]: I0220 11:54:47.433114 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerStarted","Data":"1c5678620badef46cf4ec23ce00f114b50bbc4d668fc0a3de390930731198bcb"} Feb 20 11:54:47.881946 master-0 kubenswrapper[7756]: I0220 11:54:47.881864 7756 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:47.881946 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:47.881946 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:47.881946 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:47.882402 master-0 kubenswrapper[7756]: I0220 11:54:47.881961 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:48.880367 master-0 kubenswrapper[7756]: I0220 11:54:48.880256 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:48.880367 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:48.880367 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:48.880367 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:48.880367 master-0 kubenswrapper[7756]: I0220 11:54:48.880357 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:49.880471 master-0 kubenswrapper[7756]: I0220 11:54:49.880383 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:49.880471 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:49.880471 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:49.880471 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:49.880471 master-0 kubenswrapper[7756]: I0220 11:54:49.880467 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:50.880297 master-0 kubenswrapper[7756]: I0220 11:54:50.880199 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:50.880297 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:50.880297 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:50.880297 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:50.881479 master-0 kubenswrapper[7756]: I0220 11:54:50.880311 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:51.880149 master-0 kubenswrapper[7756]: I0220 11:54:51.880069 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:51.880149 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:51.880149 master-0 kubenswrapper[7756]: [+]process-running ok 
Feb 20 11:54:51.880149 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:51.880604 master-0 kubenswrapper[7756]: I0220 11:54:51.880160 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:52.879810 master-0 kubenswrapper[7756]: I0220 11:54:52.879704 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:52.879810 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:52.879810 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:52.879810 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:52.880740 master-0 kubenswrapper[7756]: I0220 11:54:52.879820 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:53.880133 master-0 kubenswrapper[7756]: I0220 11:54:53.880056 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:53.880133 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:53.880133 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:53.880133 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:53.881155 master-0 kubenswrapper[7756]: I0220 11:54:53.880153 7756 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:54.880242 master-0 kubenswrapper[7756]: I0220 11:54:54.880181 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:54.880242 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:54.880242 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:54.880242 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:54.881143 master-0 kubenswrapper[7756]: I0220 11:54:54.880256 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:55.879749 master-0 kubenswrapper[7756]: I0220 11:54:55.879688 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:54:55.879749 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:54:55.879749 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:54:55.879749 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:54:55.881005 master-0 kubenswrapper[7756]: I0220 11:54:55.879777 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:54:56.880791 
master-0 kubenswrapper[7756]: I0220 11:54:56.880724 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:54:56.880791 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:54:56.880791 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:54:56.880791 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:54:56.881775 master-0 kubenswrapper[7756]: I0220 11:54:56.880799 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[... the same five-line startup-probe failure block repeats once per second from 11:54:57 through 11:55:02; only the timestamps differ ...]
Feb 20 11:55:03.664137 master-0 kubenswrapper[7756]: I0220 11:55:03.663879 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
[... startup-probe failure block repeats once per second from 11:55:03 through 11:55:21 ...]
Feb 20 11:55:22.880024 master-0 kubenswrapper[7756]: I0220 11:55:22.879934 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:55:22.880024 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:55:22.880024 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:55:22.880024 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:55:22.880448 master-0 kubenswrapper[7756]: I0220 11:55:22.880027 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:55:22.880448 master-0 kubenswrapper[7756]: I0220 11:55:22.880100 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:55:22.880898 master-0 kubenswrapper[7756]: I0220 11:55:22.880843 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"59934f71df55065f6ab9cbdff084344dc055464c00d5db2644ae6d5d661e4e89"} pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" containerMessage="Container router failed startup probe, will be restarted"
Feb 20 11:55:22.880988 master-0 kubenswrapper[7756]: I0220 11:55:22.880915 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" containerID="cri-o://59934f71df55065f6ab9cbdff084344dc055464c00d5db2644ae6d5d661e4e89" gracePeriod=3600
Feb 20 11:56:09.093140 master-0 kubenswrapper[7756]: I0220 11:56:09.093051 7756 generic.go:334] "Generic (PLEG): container finished" podID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerID="59934f71df55065f6ab9cbdff084344dc055464c00d5db2644ae6d5d661e4e89" exitCode=0
Feb 20 11:56:09.093140 master-0 kubenswrapper[7756]: I0220 11:56:09.093109 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerDied","Data":"59934f71df55065f6ab9cbdff084344dc055464c00d5db2644ae6d5d661e4e89"}
Feb 20 11:56:10.105187 master-0 kubenswrapper[7756]: I0220 11:56:10.105099 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerStarted","Data":"e665c0ba7cf5562cef899fea3b259e95ae91076c695d828d8b5ee4e482dac445"}
Feb 20 11:56:10.878234 master-0 kubenswrapper[7756]: I0220 11:56:10.878106 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:56:10.881569 master-0 kubenswrapper[7756]: I0220 11:56:10.881461 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:56:10.881569 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:56:10.881569 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:56:10.881569 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:56:10.881854 master-0 kubenswrapper[7756]: I0220 11:56:10.881587 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[... startup-probe failure block repeats once per second from 11:56:11 through 11:56:18 ...]
Feb 20 11:56:19.878805 master-0 kubenswrapper[7756]: I0220 11:56:19.877319 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
[... startup-probe failure block repeats once per second from 11:56:19 through 11:56:26 ...]
Feb 20 11:56:27.880044 master-0 kubenswrapper[7756]: I0220 11:56:27.879931 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:27.880044 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:27.880044 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:27.880044 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:27.881022 master-0 kubenswrapper[7756]: I0220 11:56:27.880072 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:28.880562 master-0 kubenswrapper[7756]: I0220 11:56:28.880447 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:28.880562 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:28.880562 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:28.880562 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:28.881800 master-0 kubenswrapper[7756]: I0220 11:56:28.880606 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:29.879674 master-0 kubenswrapper[7756]: I0220 11:56:29.879575 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:29.879674 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 
11:56:29.879674 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:29.879674 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:29.880126 master-0 kubenswrapper[7756]: I0220 11:56:29.879693 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:30.880307 master-0 kubenswrapper[7756]: I0220 11:56:30.880165 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:30.880307 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:30.880307 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:30.880307 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:30.881286 master-0 kubenswrapper[7756]: I0220 11:56:30.880321 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:31.880238 master-0 kubenswrapper[7756]: I0220 11:56:31.880146 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:31.880238 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:31.880238 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:31.880238 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:31.881319 master-0 kubenswrapper[7756]: I0220 11:56:31.880245 
7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:32.880226 master-0 kubenswrapper[7756]: I0220 11:56:32.880113 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:32.880226 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:32.880226 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:32.880226 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:32.881391 master-0 kubenswrapper[7756]: I0220 11:56:32.880223 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:33.880260 master-0 kubenswrapper[7756]: I0220 11:56:33.880171 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:33.880260 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:33.880260 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:33.880260 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:33.881248 master-0 kubenswrapper[7756]: I0220 11:56:33.880260 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 20 11:56:34.880669 master-0 kubenswrapper[7756]: I0220 11:56:34.880506 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:34.880669 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:34.880669 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:34.880669 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:34.880669 master-0 kubenswrapper[7756]: I0220 11:56:34.880660 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:35.880811 master-0 kubenswrapper[7756]: I0220 11:56:35.880742 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:35.880811 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:35.880811 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:35.880811 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:35.882014 master-0 kubenswrapper[7756]: I0220 11:56:35.880837 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:36.880420 master-0 kubenswrapper[7756]: I0220 11:56:36.880288 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:36.880420 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:36.880420 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:36.880420 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:36.880420 master-0 kubenswrapper[7756]: I0220 11:56:36.880396 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:37.880559 master-0 kubenswrapper[7756]: I0220 11:56:37.880446 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:37.880559 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:37.880559 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:37.880559 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:37.881037 master-0 kubenswrapper[7756]: I0220 11:56:37.880609 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:38.880087 master-0 kubenswrapper[7756]: I0220 11:56:38.880011 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:38.880087 master-0 kubenswrapper[7756]: 
[-]has-synced failed: reason withheld Feb 20 11:56:38.880087 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:38.880087 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:38.881094 master-0 kubenswrapper[7756]: I0220 11:56:38.880106 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:39.880960 master-0 kubenswrapper[7756]: I0220 11:56:39.880863 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:39.880960 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:39.880960 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:39.880960 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:39.882211 master-0 kubenswrapper[7756]: I0220 11:56:39.880966 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:40.880441 master-0 kubenswrapper[7756]: I0220 11:56:40.880352 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:40.880441 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:40.880441 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:40.880441 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:40.880441 master-0 
kubenswrapper[7756]: I0220 11:56:40.880437 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:41.880184 master-0 kubenswrapper[7756]: I0220 11:56:41.880076 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:41.880184 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:41.880184 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:41.880184 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:41.881210 master-0 kubenswrapper[7756]: I0220 11:56:41.880180 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:42.881980 master-0 kubenswrapper[7756]: I0220 11:56:42.881668 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:42.881980 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:42.881980 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:42.881980 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:42.883451 master-0 kubenswrapper[7756]: I0220 11:56:42.883392 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:43.880106 master-0 kubenswrapper[7756]: I0220 11:56:43.879994 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:43.880106 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:43.880106 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:43.880106 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:43.880571 master-0 kubenswrapper[7756]: I0220 11:56:43.880116 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:44.880283 master-0 kubenswrapper[7756]: I0220 11:56:44.880177 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:44.880283 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:44.880283 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:44.880283 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:44.881227 master-0 kubenswrapper[7756]: I0220 11:56:44.880281 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:45.881076 master-0 kubenswrapper[7756]: I0220 11:56:45.880997 7756 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:45.881076 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:45.881076 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:45.881076 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:45.882237 master-0 kubenswrapper[7756]: I0220 11:56:45.881097 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:46.881220 master-0 kubenswrapper[7756]: I0220 11:56:46.881112 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:46.881220 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:46.881220 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:46.881220 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:46.882230 master-0 kubenswrapper[7756]: I0220 11:56:46.881230 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:47.880320 master-0 kubenswrapper[7756]: I0220 11:56:47.880257 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 
11:56:47.880320 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:47.880320 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:47.880320 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:47.880873 master-0 kubenswrapper[7756]: I0220 11:56:47.880829 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:48.669554 master-0 kubenswrapper[7756]: I0220 11:56:48.669272 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-f6xzr"] Feb 20 11:56:48.671473 master-0 kubenswrapper[7756]: E0220 11:56:48.670216 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148cc321-3a17-4852-a75a-e8ac95139eb8" containerName="installer" Feb 20 11:56:48.671473 master-0 kubenswrapper[7756]: I0220 11:56:48.670249 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="148cc321-3a17-4852-a75a-e8ac95139eb8" containerName="installer" Feb 20 11:56:48.671473 master-0 kubenswrapper[7756]: I0220 11:56:48.670661 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="148cc321-3a17-4852-a75a-e8ac95139eb8" containerName="installer" Feb 20 11:56:48.671716 master-0 kubenswrapper[7756]: I0220 11:56:48.671573 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f6xzr"
Feb 20 11:56:48.674014 master-0 kubenswrapper[7756]: I0220 11:56:48.673969 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 20 11:56:48.674186 master-0 kubenswrapper[7756]: I0220 11:56:48.674007 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 20 11:56:48.676214 master-0 kubenswrapper[7756]: I0220 11:56:48.676118 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 20 11:56:48.676894 master-0 kubenswrapper[7756]: I0220 11:56:48.676820 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-mr6l7"
Feb 20 11:56:48.688929 master-0 kubenswrapper[7756]: I0220 11:56:48.688857 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f6xzr"]
Feb 20 11:56:48.870976 master-0 kubenswrapper[7756]: I0220 11:56:48.870893 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr"
Feb 20 11:56:48.871216 master-0 kubenswrapper[7756]: I0220 11:56:48.871045 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lkp\" (UniqueName: \"kubernetes.io/projected/39790258-73bc-4c37-a935-e8d3c2a2d5c6-kube-api-access-94lkp\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr"
Feb 20 11:56:48.879929 master-0 kubenswrapper[7756]: I0220 11:56:48.879856 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:56:48.879929 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:56:48.879929 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:56:48.879929 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:56:48.880205 master-0 kubenswrapper[7756]: I0220 11:56:48.879938 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:56:48.972350 master-0 kubenswrapper[7756]: I0220 11:56:48.972222 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94lkp\" (UniqueName: \"kubernetes.io/projected/39790258-73bc-4c37-a935-e8d3c2a2d5c6-kube-api-access-94lkp\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr"
Feb 20 11:56:48.972478 master-0 kubenswrapper[7756]: I0220 11:56:48.972405 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr"
Feb 20 11:56:48.972647 master-0 kubenswrapper[7756]: E0220 11:56:48.972603 7756 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 20 11:56:48.972738 master-0 kubenswrapper[7756]: E0220 11:56:48.972701 7756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert podName:39790258-73bc-4c37-a935-e8d3c2a2d5c6 nodeName:}" failed. No retries permitted until 2026-02-20 11:56:49.472670498 +0000 UTC m=+455.214918546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert") pod "ingress-canary-f6xzr" (UID: "39790258-73bc-4c37-a935-e8d3c2a2d5c6") : secret "canary-serving-cert" not found
Feb 20 11:56:49.017483 master-0 kubenswrapper[7756]: I0220 11:56:49.017200 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94lkp\" (UniqueName: \"kubernetes.io/projected/39790258-73bc-4c37-a935-e8d3c2a2d5c6-kube-api-access-94lkp\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr"
Feb 20 11:56:49.429449 master-0 kubenswrapper[7756]: I0220 11:56:49.429402 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/2.log"
Feb 20 11:56:49.430799 master-0 kubenswrapper[7756]: I0220 11:56:49.430772 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/1.log"
Feb 20 11:56:49.431721 master-0 kubenswrapper[7756]: I0220 11:56:49.431650 7756 generic.go:334] "Generic (PLEG): container finished" podID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" containerID="1c5678620badef46cf4ec23ce00f114b50bbc4d668fc0a3de390930731198bcb" exitCode=1
Feb 20 11:56:49.431901 master-0 kubenswrapper[7756]: I0220 11:56:49.431703 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerDied","Data":"1c5678620badef46cf4ec23ce00f114b50bbc4d668fc0a3de390930731198bcb"}
Feb 20 11:56:49.432087 master-0 kubenswrapper[7756]: I0220 11:56:49.432057 7756 scope.go:117] "RemoveContainer" containerID="92faf490ce07d81111f5c9023da3d201553d13fd825a7918d8a229dadf38bba3"
Feb 20 11:56:49.432951 master-0 kubenswrapper[7756]: I0220 11:56:49.432885 7756 scope.go:117] "RemoveContainer" containerID="1c5678620badef46cf4ec23ce00f114b50bbc4d668fc0a3de390930731198bcb"
Feb 20 11:56:49.433342 master-0 kubenswrapper[7756]: E0220 11:56:49.433289 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e"
Feb 20 11:56:49.479842 master-0 kubenswrapper[7756]: I0220 11:56:49.479755 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr"
Feb 20 11:56:49.485858 master-0 kubenswrapper[7756]: I0220 11:56:49.485772 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr"
Feb 20 11:56:49.604785 master-0 kubenswrapper[7756]: I0220 11:56:49.604731 7756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f6xzr"
Feb 20 11:56:49.886922 master-0 kubenswrapper[7756]: I0220 11:56:49.886823 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:56:49.886922 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:56:49.886922 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:56:49.886922 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:56:49.887967 master-0 kubenswrapper[7756]: I0220 11:56:49.887034 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:56:50.120848 master-0 kubenswrapper[7756]: I0220 11:56:50.120771 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f6xzr"]
Feb 20 11:56:50.128702 master-0 kubenswrapper[7756]: W0220 11:56:50.128649 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39790258_73bc_4c37_a935_e8d3c2a2d5c6.slice/crio-e89e4070dae8204d097a2414e77e4c5c562c772569afe17fb4b2e8b090f82fda WatchSource:0}: Error finding container e89e4070dae8204d097a2414e77e4c5c562c772569afe17fb4b2e8b090f82fda: Status 404 returned error can't find the container with id e89e4070dae8204d097a2414e77e4c5c562c772569afe17fb4b2e8b090f82fda
Feb 20 11:56:50.444129 master-0 kubenswrapper[7756]: I0220 11:56:50.443953 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/2.log"
Feb 20 11:56:50.447129 master-0 kubenswrapper[7756]: I0220 11:56:50.447062 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f6xzr" event={"ID":"39790258-73bc-4c37-a935-e8d3c2a2d5c6","Type":"ContainerStarted","Data":"7b41d6e413b6ec2b448155249420c4817d95da718646ffd028268d3b99cbc14b"}
Feb 20 11:56:50.447277 master-0 kubenswrapper[7756]: I0220 11:56:50.447128 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f6xzr" event={"ID":"39790258-73bc-4c37-a935-e8d3c2a2d5c6","Type":"ContainerStarted","Data":"e89e4070dae8204d097a2414e77e4c5c562c772569afe17fb4b2e8b090f82fda"}
Feb 20 11:56:50.477851 master-0 kubenswrapper[7756]: I0220 11:56:50.477665 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-f6xzr" podStartSLOduration=2.477634351 podStartE2EDuration="2.477634351s" podCreationTimestamp="2026-02-20 11:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:56:50.473955625 +0000 UTC m=+456.216203633" watchObservedRunningTime="2026-02-20 11:56:50.477634351 +0000 UTC m=+456.219882789"
Feb 20 11:56:50.880588 master-0 kubenswrapper[7756]: I0220 11:56:50.880470 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:56:50.880588 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:56:50.880588 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:56:50.880588 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:56:50.881036 master-0 kubenswrapper[7756]: I0220 11:56:50.880601 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:56:51.880256 master-0 kubenswrapper[7756]: I0220 11:56:51.880149 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:56:51.880256 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:56:51.880256 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:56:51.880256 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:56:51.880256 master-0 kubenswrapper[7756]: I0220 11:56:51.880249 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:56:52.879680 master-0 kubenswrapper[7756]: I0220 11:56:52.879590 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:56:52.879680 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:56:52.879680 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:56:52.879680 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:56:52.879680 master-0 kubenswrapper[7756]: I0220 11:56:52.879672 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:56:53.880272 master-0 kubenswrapper[7756]: I0220 11:56:53.880163 7756
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:53.880272 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:53.880272 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:53.880272 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:53.880272 master-0 kubenswrapper[7756]: I0220 11:56:53.880259 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:54.880387 master-0 kubenswrapper[7756]: I0220 11:56:54.880289 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:54.880387 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:54.880387 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:54.880387 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:54.880387 master-0 kubenswrapper[7756]: I0220 11:56:54.880379 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:55.880883 master-0 kubenswrapper[7756]: I0220 11:56:55.880752 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:55.880883 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:55.880883 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:55.880883 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:55.880883 master-0 kubenswrapper[7756]: I0220 11:56:55.880860 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:56.879844 master-0 kubenswrapper[7756]: I0220 11:56:56.879756 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:56.879844 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:56.879844 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:56.879844 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:56.880330 master-0 kubenswrapper[7756]: I0220 11:56:56.879847 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:57.880881 master-0 kubenswrapper[7756]: I0220 11:56:57.880756 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:57.880881 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:57.880881 master-0 kubenswrapper[7756]: [+]process-running ok 
Feb 20 11:56:57.880881 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:57.880881 master-0 kubenswrapper[7756]: I0220 11:56:57.880871 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:58.880391 master-0 kubenswrapper[7756]: I0220 11:56:58.880296 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:58.880391 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:58.880391 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:58.880391 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:58.880802 master-0 kubenswrapper[7756]: I0220 11:56:58.880400 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:56:59.579366 master-0 kubenswrapper[7756]: I0220 11:56:59.579251 7756 scope.go:117] "RemoveContainer" containerID="1c5678620badef46cf4ec23ce00f114b50bbc4d668fc0a3de390930731198bcb" Feb 20 11:56:59.580625 master-0 kubenswrapper[7756]: E0220 11:56:59.579706 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" Feb 20 11:56:59.880415 
master-0 kubenswrapper[7756]: I0220 11:56:59.880229 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:56:59.880415 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:56:59.880415 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:56:59.880415 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:56:59.880415 master-0 kubenswrapper[7756]: I0220 11:56:59.880342 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:00.879942 master-0 kubenswrapper[7756]: I0220 11:57:00.879839 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:00.879942 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:00.879942 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:00.879942 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:00.880916 master-0 kubenswrapper[7756]: I0220 11:57:00.879975 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:01.879939 master-0 kubenswrapper[7756]: I0220 11:57:01.879857 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:01.879939 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:01.879939 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:01.879939 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:01.880917 master-0 kubenswrapper[7756]: I0220 11:57:01.879951 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:02.880129 master-0 kubenswrapper[7756]: I0220 11:57:02.880044 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:02.880129 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:02.880129 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:02.880129 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:02.881081 master-0 kubenswrapper[7756]: I0220 11:57:02.880138 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:03.880933 master-0 kubenswrapper[7756]: I0220 11:57:03.880850 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:03.880933 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:03.880933 master-0 
kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:03.880933 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:03.882216 master-0 kubenswrapper[7756]: I0220 11:57:03.880946 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:04.880509 master-0 kubenswrapper[7756]: I0220 11:57:04.880459 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:04.880509 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:04.880509 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:04.880509 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:04.880965 master-0 kubenswrapper[7756]: I0220 11:57:04.880922 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:05.880716 master-0 kubenswrapper[7756]: I0220 11:57:05.880638 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:05.880716 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:05.880716 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:05.880716 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:05.881777 master-0 kubenswrapper[7756]: I0220 11:57:05.880739 7756 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:06.880488 master-0 kubenswrapper[7756]: I0220 11:57:06.880413 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:06.880488 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:06.880488 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:06.880488 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:06.880946 master-0 kubenswrapper[7756]: I0220 11:57:06.880501 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:07.879907 master-0 kubenswrapper[7756]: I0220 11:57:07.879791 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:07.879907 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:07.879907 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:07.879907 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:07.880928 master-0 kubenswrapper[7756]: I0220 11:57:07.879937 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 20 11:57:08.879355 master-0 kubenswrapper[7756]: I0220 11:57:08.879267 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:08.879355 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:08.879355 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:08.879355 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:08.879807 master-0 kubenswrapper[7756]: I0220 11:57:08.879376 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:09.880557 master-0 kubenswrapper[7756]: I0220 11:57:09.880453 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:09.880557 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:09.880557 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:09.880557 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:09.882732 master-0 kubenswrapper[7756]: I0220 11:57:09.880599 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:10.880261 master-0 kubenswrapper[7756]: I0220 11:57:10.880166 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:10.880261 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:10.880261 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:10.880261 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:10.881306 master-0 kubenswrapper[7756]: I0220 11:57:10.880277 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:11.580306 master-0 kubenswrapper[7756]: I0220 11:57:11.580211 7756 scope.go:117] "RemoveContainer" containerID="1c5678620badef46cf4ec23ce00f114b50bbc4d668fc0a3de390930731198bcb" Feb 20 11:57:11.881383 master-0 kubenswrapper[7756]: I0220 11:57:11.881200 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:11.881383 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:11.881383 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:11.881383 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:11.882516 master-0 kubenswrapper[7756]: I0220 11:57:11.881335 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:12.660115 master-0 kubenswrapper[7756]: I0220 11:57:12.660058 7756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/2.log" Feb 20 11:57:12.661272 master-0 kubenswrapper[7756]: I0220 11:57:12.661204 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerStarted","Data":"095bad8ef44d69b4cc26fcf2dd343a67938137ce3213cb7022a98a05d1eb31af"} Feb 20 11:57:12.879327 master-0 kubenswrapper[7756]: I0220 11:57:12.879266 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:12.879327 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:12.879327 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:12.879327 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:12.879630 master-0 kubenswrapper[7756]: I0220 11:57:12.879360 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:13.880182 master-0 kubenswrapper[7756]: I0220 11:57:13.880108 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:13.880182 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:13.880182 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:13.880182 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:13.881078 master-0 kubenswrapper[7756]: 
I0220 11:57:13.880235 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:14.880270 master-0 kubenswrapper[7756]: I0220 11:57:14.880211 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:14.880270 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:14.880270 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:14.880270 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:14.880803 master-0 kubenswrapper[7756]: I0220 11:57:14.880289 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:15.880404 master-0 kubenswrapper[7756]: I0220 11:57:15.880295 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:15.880404 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:15.880404 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:15.880404 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:15.880404 master-0 kubenswrapper[7756]: I0220 11:57:15.880391 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:16.880483 master-0 kubenswrapper[7756]: I0220 11:57:16.880388 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:16.880483 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:16.880483 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:16.880483 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:16.880483 master-0 kubenswrapper[7756]: I0220 11:57:16.880476 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:17.880378 master-0 kubenswrapper[7756]: I0220 11:57:17.879905 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:17.880378 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:17.880378 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:17.880378 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:17.880378 master-0 kubenswrapper[7756]: I0220 11:57:17.879982 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:18.880688 master-0 kubenswrapper[7756]: I0220 11:57:18.880080 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:18.880688 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:18.880688 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:18.880688 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:18.880688 master-0 kubenswrapper[7756]: I0220 11:57:18.880186 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:19.880875 master-0 kubenswrapper[7756]: I0220 11:57:19.880763 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:19.880875 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:19.880875 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:19.880875 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:19.881843 master-0 kubenswrapper[7756]: I0220 11:57:19.880893 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:20.880114 master-0 kubenswrapper[7756]: I0220 11:57:20.879991 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:20.880114 master-0 
kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:20.880114 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:20.880114 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:20.880114 master-0 kubenswrapper[7756]: I0220 11:57:20.880110 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:21.880382 master-0 kubenswrapper[7756]: I0220 11:57:21.880278 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:21.880382 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:21.880382 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:21.880382 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:21.881084 master-0 kubenswrapper[7756]: I0220 11:57:21.880382 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:22.880340 master-0 kubenswrapper[7756]: I0220 11:57:22.880249 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:22.880340 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:22.880340 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:22.880340 master-0 kubenswrapper[7756]: healthz check failed Feb 20 
11:57:22.881259 master-0 kubenswrapper[7756]: I0220 11:57:22.880342 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:23.855281 master-0 kubenswrapper[7756]: I0220 11:57:23.855230 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-942hp"]
Feb 20 11:57:23.855999 master-0 kubenswrapper[7756]: I0220 11:57:23.855972 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:23.858041 master-0 kubenswrapper[7756]: I0220 11:57:23.857982 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-fdbqz"
Feb 20 11:57:23.858185 master-0 kubenswrapper[7756]: I0220 11:57:23.858005 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Feb 20 11:57:23.880247 master-0 kubenswrapper[7756]: I0220 11:57:23.880184 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:23.880247 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:23.880247 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:23.880247 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:23.881001 master-0 kubenswrapper[7756]: I0220 11:57:23.880274 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:24.056452 master-0 kubenswrapper[7756]: I0220 11:57:24.056386 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-ready\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.056693 master-0 kubenswrapper[7756]: I0220 11:57:24.056506 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sc8b\" (UniqueName: \"kubernetes.io/projected/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-kube-api-access-2sc8b\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.056769 master-0 kubenswrapper[7756]: I0220 11:57:24.056719 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.056989 master-0 kubenswrapper[7756]: I0220 11:57:24.056946 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.158473 master-0 kubenswrapper[7756]: I0220 11:57:24.158305 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.158473 master-0 kubenswrapper[7756]: I0220 11:57:24.158461 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-ready\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.158805 master-0 kubenswrapper[7756]: I0220 11:57:24.158500 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sc8b\" (UniqueName: \"kubernetes.io/projected/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-kube-api-access-2sc8b\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.158805 master-0 kubenswrapper[7756]: I0220 11:57:24.158689 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.159075 master-0 kubenswrapper[7756]: I0220 11:57:24.159006 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.159431 master-0 kubenswrapper[7756]: I0220 11:57:24.159350 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-ready\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.159692 master-0 kubenswrapper[7756]: I0220 11:57:24.159357 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.192610 master-0 kubenswrapper[7756]: I0220 11:57:24.192284 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sc8b\" (UniqueName: \"kubernetes.io/projected/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-kube-api-access-2sc8b\") pod \"cni-sysctl-allowlist-ds-942hp\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.478051 master-0 kubenswrapper[7756]: I0220 11:57:24.477900 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:24.508683 master-0 kubenswrapper[7756]: W0220 11:57:24.508607 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2e77fc7_257a_4cf6_81f2_a4ce111f5470.slice/crio-39dd029bc3399a5b13d47bb0762450053fa4fe2da076ec2edf5f8ccd0e0cae9d WatchSource:0}: Error finding container 39dd029bc3399a5b13d47bb0762450053fa4fe2da076ec2edf5f8ccd0e0cae9d: Status 404 returned error can't find the container with id 39dd029bc3399a5b13d47bb0762450053fa4fe2da076ec2edf5f8ccd0e0cae9d
Feb 20 11:57:24.754268 master-0 kubenswrapper[7756]: I0220 11:57:24.754205 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" event={"ID":"f2e77fc7-257a-4cf6-81f2-a4ce111f5470","Type":"ContainerStarted","Data":"39dd029bc3399a5b13d47bb0762450053fa4fe2da076ec2edf5f8ccd0e0cae9d"}
Feb 20 11:57:24.880326 master-0 kubenswrapper[7756]: I0220 11:57:24.880213 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:24.880326 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:24.880326 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:24.880326 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:24.881902 master-0 kubenswrapper[7756]: I0220 11:57:24.880307 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:25.765484 master-0 kubenswrapper[7756]: I0220 11:57:25.765401 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" event={"ID":"f2e77fc7-257a-4cf6-81f2-a4ce111f5470","Type":"ContainerStarted","Data":"c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a"}
Feb 20 11:57:25.793227 master-0 kubenswrapper[7756]: I0220 11:57:25.793103 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" podStartSLOduration=2.793075376 podStartE2EDuration="2.793075376s" podCreationTimestamp="2026-02-20 11:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:57:25.786919971 +0000 UTC m=+491.529168019" watchObservedRunningTime="2026-02-20 11:57:25.793075376 +0000 UTC m=+491.535323424"
Feb 20 11:57:25.880849 master-0 kubenswrapper[7756]: I0220 11:57:25.880730 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:25.880849 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:25.880849 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:25.880849 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:25.881790 master-0 kubenswrapper[7756]: I0220 11:57:25.880892 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:26.774882 master-0 kubenswrapper[7756]: I0220 11:57:26.774774 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:26.809345 master-0 kubenswrapper[7756]: I0220 11:57:26.809221 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp"
Feb 20 11:57:26.880230 master-0 kubenswrapper[7756]: I0220 11:57:26.880141 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:26.880230 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:26.880230 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:26.880230 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:26.880727 master-0 kubenswrapper[7756]: I0220 11:57:26.880232 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:27.868115 master-0 kubenswrapper[7756]: I0220 11:57:27.868012 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-942hp"]
Feb 20 11:57:27.880296 master-0 kubenswrapper[7756]: I0220 11:57:27.880241 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:27.880296 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:27.880296 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:27.880296 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:27.880760 master-0 kubenswrapper[7756]: I0220 11:57:27.880712 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:28.880205 master-0 kubenswrapper[7756]: I0220 11:57:28.880119 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:28.880205 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:28.880205 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:28.880205 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:28.881264 master-0 kubenswrapper[7756]: I0220 11:57:28.880214 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:29.801230 master-0 kubenswrapper[7756]: I0220 11:57:29.801046 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" podUID="f2e77fc7-257a-4cf6-81f2-a4ce111f5470" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" gracePeriod=30
Feb 20 11:57:29.880204 master-0 kubenswrapper[7756]: I0220 11:57:29.880120 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:29.880204 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:29.880204 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:29.880204 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:29.881317 master-0 kubenswrapper[7756]: I0220 11:57:29.880212 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:30.879981 master-0 kubenswrapper[7756]: I0220 11:57:30.879878 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:30.879981 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:30.879981 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:30.879981 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:30.879981 master-0 kubenswrapper[7756]: I0220 11:57:30.879979 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:31.879829 master-0 kubenswrapper[7756]: I0220 11:57:31.879745 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:31.879829 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:31.879829 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:31.879829 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:31.879829 master-0 kubenswrapper[7756]: I0220 11:57:31.879824 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:32.880856 master-0 kubenswrapper[7756]: I0220 11:57:32.880741 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:32.880856 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:32.880856 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:32.880856 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:32.882161 master-0 kubenswrapper[7756]: I0220 11:57:32.880861 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:33.446382 master-0 kubenswrapper[7756]: I0220 11:57:33.446252 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"]
Feb 20 11:57:33.447942 master-0 kubenswrapper[7756]: I0220 11:57:33.447903 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"
Feb 20 11:57:33.461239 master-0 kubenswrapper[7756]: I0220 11:57:33.461156 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-j7fmn"
Feb 20 11:57:33.475587 master-0 kubenswrapper[7756]: I0220 11:57:33.475039 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"]
Feb 20 11:57:33.526013 master-0 kubenswrapper[7756]: I0220 11:57:33.525934 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmdd\" (UniqueName: \"kubernetes.io/projected/6479d88f-463f-48ed-846d-2747752a8abb-kube-api-access-mfmdd\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"
Feb 20 11:57:33.526247 master-0 kubenswrapper[7756]: I0220 11:57:33.526058 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"
Feb 20 11:57:33.627707 master-0 kubenswrapper[7756]: I0220 11:57:33.627644 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfmdd\" (UniqueName: \"kubernetes.io/projected/6479d88f-463f-48ed-846d-2747752a8abb-kube-api-access-mfmdd\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"
Feb 20 11:57:33.627973 master-0 kubenswrapper[7756]: I0220 11:57:33.627757 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"
Feb 20 11:57:33.648555 master-0 kubenswrapper[7756]: I0220 11:57:33.648322 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"
Feb 20 11:57:33.667552 master-0 kubenswrapper[7756]: I0220 11:57:33.666627 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfmdd\" (UniqueName: \"kubernetes.io/projected/6479d88f-463f-48ed-846d-2747752a8abb-kube-api-access-mfmdd\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"
Feb 20 11:57:33.782788 master-0 kubenswrapper[7756]: I0220 11:57:33.782700 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"
Feb 20 11:57:33.880201 master-0 kubenswrapper[7756]: I0220 11:57:33.880092 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:33.880201 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:33.880201 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:33.880201 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:33.880755 master-0 kubenswrapper[7756]: I0220 11:57:33.880201 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:34.294865 master-0 kubenswrapper[7756]: I0220 11:57:34.294767 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2"]
Feb 20 11:57:34.304084 master-0 kubenswrapper[7756]: W0220 11:57:34.303968 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6479d88f_463f_48ed_846d_2747752a8abb.slice/crio-0318746ff4f748b910f4c4078a258eb92f24f864ae719352a32329d892129cdb WatchSource:0}: Error finding container 0318746ff4f748b910f4c4078a258eb92f24f864ae719352a32329d892129cdb: Status 404 returned error can't find the container with id 0318746ff4f748b910f4c4078a258eb92f24f864ae719352a32329d892129cdb
Feb 20 11:57:34.481270 master-0 kubenswrapper[7756]: E0220 11:57:34.481147 7756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 11:57:34.483637 master-0 kubenswrapper[7756]: E0220 11:57:34.483520 7756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 11:57:34.485762 master-0 kubenswrapper[7756]: E0220 11:57:34.485664 7756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 11:57:34.485912 master-0 kubenswrapper[7756]: E0220 11:57:34.485767 7756 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" podUID="f2e77fc7-257a-4cf6-81f2-a4ce111f5470" containerName="kube-multus-additional-cni-plugins"
Feb 20 11:57:34.863025 master-0 kubenswrapper[7756]: I0220 11:57:34.862927 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" event={"ID":"6479d88f-463f-48ed-846d-2747752a8abb","Type":"ContainerStarted","Data":"91535d87de5caa8be22b3852e1b71fa80c46d559ec9a91f4a42a840c174f5874"}
Feb 20 11:57:34.863025 master-0 kubenswrapper[7756]: I0220 11:57:34.863008 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" event={"ID":"6479d88f-463f-48ed-846d-2747752a8abb","Type":"ContainerStarted","Data":"0318746ff4f748b910f4c4078a258eb92f24f864ae719352a32329d892129cdb"}
Feb 20 11:57:34.880445 master-0 kubenswrapper[7756]: I0220 11:57:34.880345 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:34.880445 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:34.880445 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:34.880445 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:34.880445 master-0 kubenswrapper[7756]: I0220 11:57:34.880430 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:35.875168 master-0 kubenswrapper[7756]: I0220 11:57:35.875063 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" event={"ID":"6479d88f-463f-48ed-846d-2747752a8abb","Type":"ContainerStarted","Data":"154ac47a03fcb0de7c9b9ab34cd4da0e80e14ffaf8568e39613be5843499f26e"}
Feb 20 11:57:35.880268 master-0 kubenswrapper[7756]: I0220 11:57:35.880207 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:35.880268 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:35.880268 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:35.880268 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:35.880618 master-0 kubenswrapper[7756]: I0220 11:57:35.880267 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:35.900329 master-0 kubenswrapper[7756]: I0220 11:57:35.900211 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" podStartSLOduration=2.900185971 podStartE2EDuration="2.900185971s" podCreationTimestamp="2026-02-20 11:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:57:35.898894992 +0000 UTC m=+501.641143060" watchObservedRunningTime="2026-02-20 11:57:35.900185971 +0000 UTC m=+501.642434029"
Feb 20 11:57:35.966156 master-0 kubenswrapper[7756]: I0220 11:57:35.965126 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"]
Feb 20 11:57:35.966156 master-0 kubenswrapper[7756]: I0220 11:57:35.965651 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" podUID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerName="multus-admission-controller" containerID="cri-o://6e0d344ebc9083ae093b3615560303e004f95402f791a1230a823e11b3266557" gracePeriod=30
Feb 20 11:57:35.966156 master-0 kubenswrapper[7756]: I0220 11:57:35.965889 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" podUID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerName="kube-rbac-proxy" containerID="cri-o://6cceab7cff3eceea2a18c3f9dabbbeccd1e0ebcb1b3ce52fedc88dcebb268425" gracePeriod=30
Feb 20 11:57:36.879058 master-0 kubenswrapper[7756]: I0220 11:57:36.878960 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:36.879058 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:36.879058 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:36.879058 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:36.880039 master-0 kubenswrapper[7756]: I0220 11:57:36.879058 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:36.887779 master-0 kubenswrapper[7756]: I0220 11:57:36.887712 7756 generic.go:334] "Generic (PLEG): container finished" podID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerID="6cceab7cff3eceea2a18c3f9dabbbeccd1e0ebcb1b3ce52fedc88dcebb268425" exitCode=0
Feb 20 11:57:36.887914 master-0 kubenswrapper[7756]: I0220 11:57:36.887797 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" event={"ID":"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783","Type":"ContainerDied","Data":"6cceab7cff3eceea2a18c3f9dabbbeccd1e0ebcb1b3ce52fedc88dcebb268425"}
Feb 20 11:57:37.880059 master-0 kubenswrapper[7756]: I0220 11:57:37.879955 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:37.880059 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:37.880059 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:37.880059 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:37.881020 master-0 kubenswrapper[7756]: I0220 11:57:37.880058 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:38.880262 master-0 kubenswrapper[7756]: I0220 11:57:38.880158 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:38.880262 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:38.880262 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:38.880262 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:38.881437 master-0 kubenswrapper[7756]: I0220 11:57:38.880276 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:39.880741 master-0 kubenswrapper[7756]: I0220 11:57:39.880681 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:39.880741 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:39.880741 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:39.880741 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:39.881777 master-0 kubenswrapper[7756]: I0220 11:57:39.881699 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:40.880550 master-0 kubenswrapper[7756]: I0220 11:57:40.880406 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:40.880550 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:40.880550 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:40.880550 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:40.880550 master-0 kubenswrapper[7756]: I0220 11:57:40.880486 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:41.879426 master-0 kubenswrapper[7756]: I0220 11:57:41.879354 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:41.879426 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:41.879426 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:41.879426 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:41.879426 master-0 kubenswrapper[7756]: I0220 11:57:41.879422 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:42.880056 master-0 kubenswrapper[7756]: I0220 11:57:42.879942 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:42.880056 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:42.880056 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:42.880056 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:42.881144 master-0 kubenswrapper[7756]: I0220 11:57:42.880050 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:43.880084 master-0 kubenswrapper[7756]: I0220 11:57:43.879965 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:43.880084 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:43.880084 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:43.880084 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:43.880084 master-0 kubenswrapper[7756]: I0220 11:57:43.880065 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:57:44.481512 master-0 kubenswrapper[7756]: E0220 11:57:44.481418 7756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 11:57:44.484144 master-0 kubenswrapper[7756]: E0220 11:57:44.484062 7756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 11:57:44.487172 master-0 kubenswrapper[7756]: E0220 11:57:44.487120 7756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 11:57:44.487172 master-0 kubenswrapper[7756]: E0220 11:57:44.487146 7756 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" podUID="f2e77fc7-257a-4cf6-81f2-a4ce111f5470" containerName="kube-multus-additional-cni-plugins"
Feb 20 11:57:44.880030 master-0 kubenswrapper[7756]: I0220 11:57:44.879925 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:57:44.880030 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:57:44.880030 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:57:44.880030 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:57:44.880030 master-0 kubenswrapper[7756]: I0220 11:57:44.880021 7756
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:45.880566 master-0 kubenswrapper[7756]: I0220 11:57:45.880478 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:45.880566 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:45.880566 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:45.880566 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:45.881550 master-0 kubenswrapper[7756]: I0220 11:57:45.880604 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:46.879836 master-0 kubenswrapper[7756]: I0220 11:57:46.879715 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:46.879836 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:46.879836 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:46.879836 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:46.879836 master-0 kubenswrapper[7756]: I0220 11:57:46.879808 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 20 11:57:47.880442 master-0 kubenswrapper[7756]: I0220 11:57:47.880351 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:47.880442 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:47.880442 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:47.880442 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:47.881432 master-0 kubenswrapper[7756]: I0220 11:57:47.880452 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:48.880737 master-0 kubenswrapper[7756]: I0220 11:57:48.880636 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:48.880737 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:48.880737 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:48.880737 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:48.880737 master-0 kubenswrapper[7756]: I0220 11:57:48.880728 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:49.880232 master-0 kubenswrapper[7756]: I0220 11:57:49.880145 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:49.880232 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:49.880232 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:49.880232 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:49.881348 master-0 kubenswrapper[7756]: I0220 11:57:49.880251 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:50.880756 master-0 kubenswrapper[7756]: I0220 11:57:50.880683 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:50.880756 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:50.880756 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:50.880756 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:50.881887 master-0 kubenswrapper[7756]: I0220 11:57:50.880762 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:51.880180 master-0 kubenswrapper[7756]: I0220 11:57:51.880087 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:51.880180 master-0 kubenswrapper[7756]: 
[-]has-synced failed: reason withheld Feb 20 11:57:51.880180 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:51.880180 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:51.880612 master-0 kubenswrapper[7756]: I0220 11:57:51.880205 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:52.879712 master-0 kubenswrapper[7756]: I0220 11:57:52.879622 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:52.879712 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:52.879712 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:52.879712 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:52.881356 master-0 kubenswrapper[7756]: I0220 11:57:52.879723 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:53.881046 master-0 kubenswrapper[7756]: I0220 11:57:53.880940 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:53.881046 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:53.881046 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:53.881046 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:53.882787 master-0 
kubenswrapper[7756]: I0220 11:57:53.881105 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:54.481853 master-0 kubenswrapper[7756]: E0220 11:57:54.481761 7756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 11:57:54.484262 master-0 kubenswrapper[7756]: E0220 11:57:54.484129 7756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 11:57:54.486727 master-0 kubenswrapper[7756]: E0220 11:57:54.486657 7756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 11:57:54.486853 master-0 kubenswrapper[7756]: E0220 11:57:54.486730 7756 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" podUID="f2e77fc7-257a-4cf6-81f2-a4ce111f5470" containerName="kube-multus-additional-cni-plugins" Feb 20 11:57:54.880116 master-0 kubenswrapper[7756]: I0220 11:57:54.880028 7756 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:54.880116 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:54.880116 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:54.880116 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:54.880426 master-0 kubenswrapper[7756]: I0220 11:57:54.880121 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:55.880176 master-0 kubenswrapper[7756]: I0220 11:57:55.880098 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:55.880176 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:55.880176 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:55.880176 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:55.881172 master-0 kubenswrapper[7756]: I0220 11:57:55.880183 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:56.880201 master-0 kubenswrapper[7756]: I0220 11:57:56.880132 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:56.880201 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:56.880201 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:56.880201 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:56.881303 master-0 kubenswrapper[7756]: I0220 11:57:56.880223 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:57.879563 master-0 kubenswrapper[7756]: I0220 11:57:57.879480 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:57.879563 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:57.879563 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:57.879563 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:57.879872 master-0 kubenswrapper[7756]: I0220 11:57:57.879576 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:58.880179 master-0 kubenswrapper[7756]: I0220 11:57:58.880101 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:58.880179 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:58.880179 master-0 kubenswrapper[7756]: [+]process-running ok 
Feb 20 11:57:58.880179 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:58.881158 master-0 kubenswrapper[7756]: I0220 11:57:58.880209 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:59.880108 master-0 kubenswrapper[7756]: I0220 11:57:59.880040 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:57:59.880108 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:57:59.880108 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:57:59.880108 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:57:59.880886 master-0 kubenswrapper[7756]: I0220 11:57:59.880121 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:57:59.989832 master-0 kubenswrapper[7756]: I0220 11:57:59.989739 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-942hp_f2e77fc7-257a-4cf6-81f2-a4ce111f5470/kube-multus-additional-cni-plugins/0.log" Feb 20 11:57:59.989832 master-0 kubenswrapper[7756]: I0220 11:57:59.989840 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" Feb 20 11:58:00.052286 master-0 kubenswrapper[7756]: I0220 11:58:00.052168 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-cni-sysctl-allowlist\") pod \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " Feb 20 11:58:00.053005 master-0 kubenswrapper[7756]: I0220 11:58:00.052936 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "f2e77fc7-257a-4cf6-81f2-a4ce111f5470" (UID: "f2e77fc7-257a-4cf6-81f2-a4ce111f5470"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 11:58:00.078206 master-0 kubenswrapper[7756]: I0220 11:58:00.078059 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-942hp_f2e77fc7-257a-4cf6-81f2-a4ce111f5470/kube-multus-additional-cni-plugins/0.log" Feb 20 11:58:00.078206 master-0 kubenswrapper[7756]: I0220 11:58:00.078129 7756 generic.go:334] "Generic (PLEG): container finished" podID="f2e77fc7-257a-4cf6-81f2-a4ce111f5470" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" exitCode=137 Feb 20 11:58:00.078206 master-0 kubenswrapper[7756]: I0220 11:58:00.078167 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" event={"ID":"f2e77fc7-257a-4cf6-81f2-a4ce111f5470","Type":"ContainerDied","Data":"c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a"} Feb 20 11:58:00.078206 master-0 kubenswrapper[7756]: I0220 11:58:00.078202 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" 
event={"ID":"f2e77fc7-257a-4cf6-81f2-a4ce111f5470","Type":"ContainerDied","Data":"39dd029bc3399a5b13d47bb0762450053fa4fe2da076ec2edf5f8ccd0e0cae9d"} Feb 20 11:58:00.078721 master-0 kubenswrapper[7756]: I0220 11:58:00.078221 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-942hp" Feb 20 11:58:00.078721 master-0 kubenswrapper[7756]: I0220 11:58:00.078227 7756 scope.go:117] "RemoveContainer" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" Feb 20 11:58:00.103570 master-0 kubenswrapper[7756]: I0220 11:58:00.103494 7756 scope.go:117] "RemoveContainer" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" Feb 20 11:58:00.104058 master-0 kubenswrapper[7756]: E0220 11:58:00.103990 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a\": container with ID starting with c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a not found: ID does not exist" containerID="c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a" Feb 20 11:58:00.104177 master-0 kubenswrapper[7756]: I0220 11:58:00.104050 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a"} err="failed to get container status \"c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a\": rpc error: code = NotFound desc = could not find container \"c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a\": container with ID starting with c144501b4a0a28c3e4aaca24b3507e080361785de7119c76649674276344711a not found: ID does not exist" Feb 20 11:58:00.153568 master-0 kubenswrapper[7756]: I0220 11:58:00.153492 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" 
(UniqueName: \"kubernetes.io/empty-dir/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-ready\") pod \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " Feb 20 11:58:00.153705 master-0 kubenswrapper[7756]: I0220 11:58:00.153631 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sc8b\" (UniqueName: \"kubernetes.io/projected/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-kube-api-access-2sc8b\") pod \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " Feb 20 11:58:00.153705 master-0 kubenswrapper[7756]: I0220 11:58:00.153694 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-tuning-conf-dir\") pod \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\" (UID: \"f2e77fc7-257a-4cf6-81f2-a4ce111f5470\") " Feb 20 11:58:00.153912 master-0 kubenswrapper[7756]: I0220 11:58:00.153860 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-ready" (OuterVolumeSpecName: "ready") pod "f2e77fc7-257a-4cf6-81f2-a4ce111f5470" (UID: "f2e77fc7-257a-4cf6-81f2-a4ce111f5470"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 11:58:00.153989 master-0 kubenswrapper[7756]: I0220 11:58:00.153912 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "f2e77fc7-257a-4cf6-81f2-a4ce111f5470" (UID: "f2e77fc7-257a-4cf6-81f2-a4ce111f5470"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:58:00.154201 master-0 kubenswrapper[7756]: I0220 11:58:00.154148 7756 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 11:58:00.154274 master-0 kubenswrapper[7756]: I0220 11:58:00.154208 7756 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 20 11:58:00.154274 master-0 kubenswrapper[7756]: I0220 11:58:00.154237 7756 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-ready\") on node \"master-0\" DevicePath \"\"" Feb 20 11:58:00.470611 master-0 kubenswrapper[7756]: I0220 11:58:00.458137 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-kube-api-access-2sc8b" (OuterVolumeSpecName: "kube-api-access-2sc8b") pod "f2e77fc7-257a-4cf6-81f2-a4ce111f5470" (UID: "f2e77fc7-257a-4cf6-81f2-a4ce111f5470"). InnerVolumeSpecName "kube-api-access-2sc8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 11:58:00.480082 master-0 kubenswrapper[7756]: I0220 11:58:00.475673 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sc8b\" (UniqueName: \"kubernetes.io/projected/f2e77fc7-257a-4cf6-81f2-a4ce111f5470-kube-api-access-2sc8b\") on node \"master-0\" DevicePath \"\"" Feb 20 11:58:00.702867 master-0 kubenswrapper[7756]: I0220 11:58:00.702783 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-942hp"] Feb 20 11:58:00.710758 master-0 kubenswrapper[7756]: I0220 11:58:00.710678 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-942hp"] Feb 20 11:58:00.880499 master-0 kubenswrapper[7756]: I0220 11:58:00.880412 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:58:00.880499 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:58:00.880499 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:00.880499 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:58:00.880499 master-0 kubenswrapper[7756]: I0220 11:58:00.880489 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:01.879622 master-0 kubenswrapper[7756]: I0220 11:58:01.879503 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:58:01.879622 master-0 kubenswrapper[7756]: 
[-]has-synced failed: reason withheld Feb 20 11:58:01.879622 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:01.879622 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:58:01.880084 master-0 kubenswrapper[7756]: I0220 11:58:01.879652 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:02.594807 master-0 kubenswrapper[7756]: I0220 11:58:02.594696 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e77fc7-257a-4cf6-81f2-a4ce111f5470" path="/var/lib/kubelet/pods/f2e77fc7-257a-4cf6-81f2-a4ce111f5470/volumes" Feb 20 11:58:02.879970 master-0 kubenswrapper[7756]: I0220 11:58:02.879793 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:58:02.879970 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:58:02.879970 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:02.879970 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:58:02.879970 master-0 kubenswrapper[7756]: I0220 11:58:02.879907 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:03.880797 master-0 kubenswrapper[7756]: I0220 11:58:03.880709 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 
11:58:03.880797 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:58:03.880797 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:03.880797 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:58:03.881733 master-0 kubenswrapper[7756]: I0220 11:58:03.880832 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:04.880713 master-0 kubenswrapper[7756]: I0220 11:58:04.880616 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:58:04.880713 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:58:04.880713 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:04.880713 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:58:04.881523 master-0 kubenswrapper[7756]: I0220 11:58:04.880725 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:05.879706 master-0 kubenswrapper[7756]: I0220 11:58:05.879630 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:58:05.879706 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:58:05.879706 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:05.879706 master-0 kubenswrapper[7756]: healthz 
check failed Feb 20 11:58:05.880115 master-0 kubenswrapper[7756]: I0220 11:58:05.879719 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:06.146310 master-0 kubenswrapper[7756]: I0220 11:58:06.146152 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-jgv89_dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783/multus-admission-controller/0.log" Feb 20 11:58:06.146310 master-0 kubenswrapper[7756]: I0220 11:58:06.146244 7756 generic.go:334] "Generic (PLEG): container finished" podID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerID="6e0d344ebc9083ae093b3615560303e004f95402f791a1230a823e11b3266557" exitCode=137 Feb 20 11:58:06.147179 master-0 kubenswrapper[7756]: I0220 11:58:06.146306 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" event={"ID":"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783","Type":"ContainerDied","Data":"6e0d344ebc9083ae093b3615560303e004f95402f791a1230a823e11b3266557"} Feb 20 11:58:06.869746 master-0 kubenswrapper[7756]: I0220 11:58:06.869694 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-jgv89_dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783/multus-admission-controller/0.log" Feb 20 11:58:06.869936 master-0 kubenswrapper[7756]: I0220 11:58:06.869773 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:58:06.879025 master-0 kubenswrapper[7756]: I0220 11:58:06.878990 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:58:06.879025 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:58:06.879025 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:06.879025 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:58:06.879250 master-0 kubenswrapper[7756]: I0220 11:58:06.879036 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:06.978287 master-0 kubenswrapper[7756]: I0220 11:58:06.978205 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") pod \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " Feb 20 11:58:06.978498 master-0 kubenswrapper[7756]: I0220 11:58:06.978434 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx26k\" (UniqueName: \"kubernetes.io/projected/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-kube-api-access-jx26k\") pod \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\" (UID: \"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783\") " Feb 20 11:58:06.983076 master-0 kubenswrapper[7756]: I0220 11:58:06.983028 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod 
"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 11:58:06.983217 master-0 kubenswrapper[7756]: I0220 11:58:06.983156 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-kube-api-access-jx26k" (OuterVolumeSpecName: "kube-api-access-jx26k") pod "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" (UID: "dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783"). InnerVolumeSpecName "kube-api-access-jx26k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 11:58:07.080801 master-0 kubenswrapper[7756]: I0220 11:58:07.080640 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx26k\" (UniqueName: \"kubernetes.io/projected/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-kube-api-access-jx26k\") on node \"master-0\" DevicePath \"\"" Feb 20 11:58:07.080801 master-0 kubenswrapper[7756]: I0220 11:58:07.080676 7756 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 11:58:07.168968 master-0 kubenswrapper[7756]: I0220 11:58:07.168874 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-jgv89_dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783/multus-admission-controller/0.log" Feb 20 11:58:07.170057 master-0 kubenswrapper[7756]: I0220 11:58:07.168972 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" event={"ID":"dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783","Type":"ContainerDied","Data":"bd39daabfdce6754d4a4f78c48fcaecbdad1e1d29636e311b156f098b7cc24fe"} Feb 20 11:58:07.170057 master-0 kubenswrapper[7756]: I0220 11:58:07.169042 7756 scope.go:117] "RemoveContainer" 
containerID="6cceab7cff3eceea2a18c3f9dabbbeccd1e0ebcb1b3ce52fedc88dcebb268425" Feb 20 11:58:07.170057 master-0 kubenswrapper[7756]: I0220 11:58:07.169104 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89" Feb 20 11:58:07.208401 master-0 kubenswrapper[7756]: I0220 11:58:07.207423 7756 scope.go:117] "RemoveContainer" containerID="6e0d344ebc9083ae093b3615560303e004f95402f791a1230a823e11b3266557" Feb 20 11:58:07.231896 master-0 kubenswrapper[7756]: I0220 11:58:07.231774 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"] Feb 20 11:58:07.239214 master-0 kubenswrapper[7756]: I0220 11:58:07.239144 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-jgv89"] Feb 20 11:58:07.881838 master-0 kubenswrapper[7756]: I0220 11:58:07.881712 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:58:07.881838 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:58:07.881838 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:07.881838 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:58:07.882212 master-0 kubenswrapper[7756]: I0220 11:58:07.881832 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:08.591584 master-0 kubenswrapper[7756]: I0220 11:58:08.591500 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" 
path="/var/lib/kubelet/pods/dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783/volumes" Feb 20 11:58:08.880897 master-0 kubenswrapper[7756]: I0220 11:58:08.880741 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:58:08.880897 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:58:08.880897 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:08.880897 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:58:08.880897 master-0 kubenswrapper[7756]: I0220 11:58:08.880822 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:09.880582 master-0 kubenswrapper[7756]: I0220 11:58:09.880485 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:58:09.880582 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:58:09.880582 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:58:09.880582 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:58:09.881553 master-0 kubenswrapper[7756]: I0220 11:58:09.880623 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:58:09.881553 master-0 kubenswrapper[7756]: I0220 11:58:09.880691 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 11:58:09.881553 master-0 kubenswrapper[7756]: I0220 11:58:09.881470 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"e665c0ba7cf5562cef899fea3b259e95ae91076c695d828d8b5ee4e482dac445"} pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" containerMessage="Container router failed startup probe, will be restarted" Feb 20 11:58:09.881553 master-0 kubenswrapper[7756]: I0220 11:58:09.881518 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" containerID="cri-o://e665c0ba7cf5562cef899fea3b259e95ae91076c695d828d8b5ee4e482dac445" gracePeriod=3600 Feb 20 11:58:14.945550 master-0 kubenswrapper[7756]: I0220 11:58:14.945468 7756 scope.go:117] "RemoveContainer" containerID="91bf4bc38d2da6c505ee04354464ef749c6984385a6a3cb062fc7393534e0bd7" Feb 20 11:58:16.273434 master-0 kubenswrapper[7756]: I0220 11:58:16.273358 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 20 11:58:16.274346 master-0 kubenswrapper[7756]: E0220 11:58:16.273913 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e77fc7-257a-4cf6-81f2-a4ce111f5470" containerName="kube-multus-additional-cni-plugins" Feb 20 11:58:16.274346 master-0 kubenswrapper[7756]: I0220 11:58:16.273940 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e77fc7-257a-4cf6-81f2-a4ce111f5470" containerName="kube-multus-additional-cni-plugins" Feb 20 11:58:16.274346 master-0 kubenswrapper[7756]: E0220 11:58:16.273964 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerName="multus-admission-controller" Feb 20 11:58:16.274346 master-0 kubenswrapper[7756]: I0220 11:58:16.273977 7756 
state_mem.go:107] "Deleted CPUSet assignment" podUID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerName="multus-admission-controller" Feb 20 11:58:16.274346 master-0 kubenswrapper[7756]: E0220 11:58:16.274010 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerName="kube-rbac-proxy" Feb 20 11:58:16.274346 master-0 kubenswrapper[7756]: I0220 11:58:16.274023 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerName="kube-rbac-proxy" Feb 20 11:58:16.274346 master-0 kubenswrapper[7756]: I0220 11:58:16.274221 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerName="multus-admission-controller" Feb 20 11:58:16.274346 master-0 kubenswrapper[7756]: I0220 11:58:16.274245 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e77fc7-257a-4cf6-81f2-a4ce111f5470" containerName="kube-multus-additional-cni-plugins" Feb 20 11:58:16.274346 master-0 kubenswrapper[7756]: I0220 11:58:16.274264 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc02a6b2-9d5c-4b11-a2da-4aebf6a1d783" containerName="kube-rbac-proxy" Feb 20 11:58:16.275022 master-0 kubenswrapper[7756]: I0220 11:58:16.274991 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.278784 master-0 kubenswrapper[7756]: I0220 11:58:16.278724 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-7l66c" Feb 20 11:58:16.278784 master-0 kubenswrapper[7756]: I0220 11:58:16.278738 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 20 11:58:16.291294 master-0 kubenswrapper[7756]: I0220 11:58:16.291225 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 20 11:58:16.322606 master-0 kubenswrapper[7756]: I0220 11:58:16.322509 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-var-lock\") pod \"installer-2-master-0\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.322873 master-0 kubenswrapper[7756]: I0220 11:58:16.322660 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.322873 master-0 kubenswrapper[7756]: I0220 11:58:16.322743 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/305f625e-16b0-4840-a9e2-25571b49ad2a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.424114 master-0 kubenswrapper[7756]: I0220 11:58:16.424003 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/305f625e-16b0-4840-a9e2-25571b49ad2a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.424376 master-0 kubenswrapper[7756]: I0220 11:58:16.424176 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-var-lock\") pod \"installer-2-master-0\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.424376 master-0 kubenswrapper[7756]: I0220 11:58:16.424282 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.424515 master-0 kubenswrapper[7756]: I0220 11:58:16.424426 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.424626 master-0 kubenswrapper[7756]: I0220 11:58:16.424510 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-var-lock\") pod \"installer-2-master-0\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.460209 master-0 kubenswrapper[7756]: I0220 11:58:16.460139 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/305f625e-16b0-4840-a9e2-25571b49ad2a-kube-api-access\") pod 
\"installer-2-master-0\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:16.621124 master-0 kubenswrapper[7756]: I0220 11:58:16.620972 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 20 11:58:17.082897 master-0 kubenswrapper[7756]: I0220 11:58:17.082836 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 20 11:58:17.093157 master-0 kubenswrapper[7756]: W0220 11:58:17.093078 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod305f625e_16b0_4840_a9e2_25571b49ad2a.slice/crio-bf890effc236ee0e21e9e57ddce2324a331c4793a53024dbaa2deb40164eb945 WatchSource:0}: Error finding container bf890effc236ee0e21e9e57ddce2324a331c4793a53024dbaa2deb40164eb945: Status 404 returned error can't find the container with id bf890effc236ee0e21e9e57ddce2324a331c4793a53024dbaa2deb40164eb945 Feb 20 11:58:17.272024 master-0 kubenswrapper[7756]: I0220 11:58:17.271941 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"305f625e-16b0-4840-a9e2-25571b49ad2a","Type":"ContainerStarted","Data":"bf890effc236ee0e21e9e57ddce2324a331c4793a53024dbaa2deb40164eb945"} Feb 20 11:58:18.282112 master-0 kubenswrapper[7756]: I0220 11:58:18.282028 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"305f625e-16b0-4840-a9e2-25571b49ad2a","Type":"ContainerStarted","Data":"aa7475b04d1f2f206998430be0a72c2f43703844dbdb13b2c6bf74e325b14f62"} Feb 20 11:58:18.320782 master-0 kubenswrapper[7756]: I0220 11:58:18.320679 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.320649787 podStartE2EDuration="2.320649787s" podCreationTimestamp="2026-02-20 11:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:58:18.319949756 +0000 UTC m=+544.062197774" watchObservedRunningTime="2026-02-20 11:58:18.320649787 +0000 UTC m=+544.062897815" Feb 20 11:58:31.858962 master-0 kubenswrapper[7756]: I0220 11:58:31.858855 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 11:58:31.860418 master-0 kubenswrapper[7756]: I0220 11:58:31.860383 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:31.862919 master-0 kubenswrapper[7756]: I0220 11:58:31.862841 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 20 11:58:31.863450 master-0 kubenswrapper[7756]: I0220 11:58:31.863401 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9fz4f" Feb 20 11:58:31.874368 master-0 kubenswrapper[7756]: I0220 11:58:31.874292 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:31.874579 master-0 kubenswrapper[7756]: I0220 11:58:31.874457 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f22083d-dc18-4acd-aa7f-d01d407c7837-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:31.874886 master-0 kubenswrapper[7756]: I0220 11:58:31.874828 7756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:31.892842 master-0 kubenswrapper[7756]: I0220 11:58:31.892768 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 11:58:31.976169 master-0 kubenswrapper[7756]: I0220 11:58:31.976124 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f22083d-dc18-4acd-aa7f-d01d407c7837-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:31.976376 master-0 kubenswrapper[7756]: I0220 11:58:31.976350 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:31.976416 master-0 kubenswrapper[7756]: I0220 11:58:31.976395 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:31.976469 master-0 kubenswrapper[7756]: I0220 11:58:31.976438 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:31.976505 master-0 kubenswrapper[7756]: I0220 11:58:31.976456 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:32.003556 master-0 kubenswrapper[7756]: I0220 11:58:32.003461 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f22083d-dc18-4acd-aa7f-d01d407c7837-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:32.209207 master-0 kubenswrapper[7756]: I0220 11:58:32.209013 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:58:32.703029 master-0 kubenswrapper[7756]: I0220 11:58:32.702975 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 11:58:32.710124 master-0 kubenswrapper[7756]: W0220 11:58:32.710066 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3f22083d_dc18_4acd_aa7f_d01d407c7837.slice/crio-5d441a95da6316299324b66295c9678ac2bf954531cfa49dee0901be523a79b8 WatchSource:0}: Error finding container 5d441a95da6316299324b66295c9678ac2bf954531cfa49dee0901be523a79b8: Status 404 returned error can't find the container with id 5d441a95da6316299324b66295c9678ac2bf954531cfa49dee0901be523a79b8 Feb 20 11:58:33.413492 master-0 kubenswrapper[7756]: I0220 11:58:33.413333 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"3f22083d-dc18-4acd-aa7f-d01d407c7837","Type":"ContainerStarted","Data":"c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2"} Feb 20 11:58:33.413492 master-0 kubenswrapper[7756]: I0220 11:58:33.413415 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"3f22083d-dc18-4acd-aa7f-d01d407c7837","Type":"ContainerStarted","Data":"5d441a95da6316299324b66295c9678ac2bf954531cfa49dee0901be523a79b8"} Feb 20 11:58:33.445609 master-0 kubenswrapper[7756]: I0220 11:58:33.441919 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.441895903 podStartE2EDuration="2.441895903s" podCreationTimestamp="2026-02-20 11:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:58:33.43648442 +0000 UTC m=+559.178732468" 
watchObservedRunningTime="2026-02-20 11:58:33.441895903 +0000 UTC m=+559.184143951" Feb 20 11:58:36.256701 master-0 kubenswrapper[7756]: I0220 11:58:36.256578 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 11:58:36.257734 master-0 kubenswrapper[7756]: I0220 11:58:36.256878 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="3f22083d-dc18-4acd-aa7f-d01d407c7837" containerName="installer" containerID="cri-o://c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2" gracePeriod=30 Feb 20 11:58:39.272696 master-0 kubenswrapper[7756]: I0220 11:58:39.272608 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"] Feb 20 11:58:39.276746 master-0 kubenswrapper[7756]: I0220 11:58:39.276618 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 11:58:39.280588 master-0 kubenswrapper[7756]: I0220 11:58:39.280271 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 20 11:58:39.280588 master-0 kubenswrapper[7756]: I0220 11:58:39.280279 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 20 11:58:39.282718 master-0 kubenswrapper[7756]: I0220 11:58:39.280919 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 20 11:58:39.282718 master-0 kubenswrapper[7756]: I0220 11:58:39.281153 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-zhr86" Feb 20 11:58:39.282718 master-0 kubenswrapper[7756]: I0220 11:58:39.281264 7756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"federate-client-certs" Feb 20 11:58:39.282718 master-0 kubenswrapper[7756]: I0220 11:58:39.281461 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 20 11:58:39.297492 master-0 kubenswrapper[7756]: I0220 11:58:39.297412 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"] Feb 20 11:58:39.300882 master-0 kubenswrapper[7756]: I0220 11:58:39.300723 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 11:58:39.300882 master-0 kubenswrapper[7756]: I0220 11:58:39.300790 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqzpj\" (UniqueName: \"kubernetes.io/projected/aae1df07-cf9f-47a3-b146-2a0adb182660-kube-api-access-qqzpj\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 11:58:39.300882 master-0 kubenswrapper[7756]: I0220 11:58:39.300854 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 11:58:39.300882 master-0 kubenswrapper[7756]: I0220 11:58:39.300883 7756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.301186 master-0 kubenswrapper[7756]: I0220 11:58:39.300980 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-metrics-client-ca\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.301186 master-0 kubenswrapper[7756]: I0220 11:58:39.301108 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.301329 master-0 kubenswrapper[7756]: I0220 11:58:39.301190 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.301329 master-0 kubenswrapper[7756]: I0220 11:58:39.301230 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.302380 master-0 kubenswrapper[7756]: I0220 11:58:39.301670 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 20 11:58:39.402619 master-0 kubenswrapper[7756]: I0220 11:58:39.402068 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-metrics-client-ca\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.402619 master-0 kubenswrapper[7756]: I0220 11:58:39.402162 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.402619 master-0 kubenswrapper[7756]: I0220 11:58:39.402207 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.402619 master-0 kubenswrapper[7756]: I0220 11:58:39.402229 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.402619 master-0 kubenswrapper[7756]: I0220 11:58:39.402258 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.402619 master-0 kubenswrapper[7756]: I0220 11:58:39.402286 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqzpj\" (UniqueName: \"kubernetes.io/projected/aae1df07-cf9f-47a3-b146-2a0adb182660-kube-api-access-qqzpj\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.402619 master-0 kubenswrapper[7756]: I0220 11:58:39.402319 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.402619 master-0 kubenswrapper[7756]: I0220 11:58:39.402346 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.404613 master-0 kubenswrapper[7756]: I0220 11:58:39.404568 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-metrics-client-ca\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.404776 master-0 kubenswrapper[7756]: I0220 11:58:39.404740 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.404896 master-0 kubenswrapper[7756]: I0220 11:58:39.404845 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.407799 master-0 kubenswrapper[7756]: I0220 11:58:39.407741 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.408210 master-0 kubenswrapper[7756]: I0220 11:58:39.408170 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: 
\"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.409502 master-0 kubenswrapper[7756]: I0220 11:58:39.409463 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.410078 master-0 kubenswrapper[7756]: I0220 11:58:39.410028 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.438900 master-0 kubenswrapper[7756]: I0220 11:58:39.438820 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqzpj\" (UniqueName: \"kubernetes.io/projected/aae1df07-cf9f-47a3-b146-2a0adb182660-kube-api-access-qqzpj\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:39.615138 master-0 kubenswrapper[7756]: I0220 11:58:39.614978 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 11:58:40.128848 master-0 kubenswrapper[7756]: I0220 11:58:40.128745 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"]
Feb 20 11:58:40.137200 master-0 kubenswrapper[7756]: W0220 11:58:40.137125 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae1df07_cf9f_47a3_b146_2a0adb182660.slice/crio-d3506d2533f5948044615b3daf194c86dee0685849b66763860811b20d32f418 WatchSource:0}: Error finding container d3506d2533f5948044615b3daf194c86dee0685849b66763860811b20d32f418: Status 404 returned error can't find the container with id d3506d2533f5948044615b3daf194c86dee0685849b66763860811b20d32f418
Feb 20 11:58:40.140387 master-0 kubenswrapper[7756]: I0220 11:58:40.140332 7756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 11:58:40.474910 master-0 kubenswrapper[7756]: I0220 11:58:40.474737 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" event={"ID":"aae1df07-cf9f-47a3-b146-2a0adb182660","Type":"ContainerStarted","Data":"d3506d2533f5948044615b3daf194c86dee0685849b66763860811b20d32f418"}
Feb 20 11:58:40.850957 master-0 kubenswrapper[7756]: I0220 11:58:40.850890 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 20 11:58:40.851925 master-0 kubenswrapper[7756]: I0220 11:58:40.851894 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:40.875872 master-0 kubenswrapper[7756]: I0220 11:58:40.875813 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 20 11:58:40.927297 master-0 kubenswrapper[7756]: I0220 11:58:40.927213 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:40.927297 master-0 kubenswrapper[7756]: I0220 11:58:40.927277 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-var-lock\") pod \"installer-2-master-0\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:40.927746 master-0 kubenswrapper[7756]: I0220 11:58:40.927647 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:41.029764 master-0 kubenswrapper[7756]: I0220 11:58:41.029657 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:41.030033 master-0 kubenswrapper[7756]: I0220 11:58:41.029778 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-var-lock\") pod \"installer-2-master-0\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:41.030033 master-0 kubenswrapper[7756]: I0220 11:58:41.029922 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:41.030033 master-0 kubenswrapper[7756]: I0220 11:58:41.029925 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-var-lock\") pod \"installer-2-master-0\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:41.030989 master-0 kubenswrapper[7756]: I0220 11:58:41.030119 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:41.058862 master-0 kubenswrapper[7756]: I0220 11:58:41.058775 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:41.182437 master-0 kubenswrapper[7756]: I0220 11:58:41.182283 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 11:58:41.745149 master-0 kubenswrapper[7756]: W0220 11:58:41.745085 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7de8fb9d_34f7_49bc_867d_827a0f9a11e7.slice/crio-52931bca33b633a8f7b4a404b3d376c51a9562b00ed924bbb1fbf19380cd707f WatchSource:0}: Error finding container 52931bca33b633a8f7b4a404b3d376c51a9562b00ed924bbb1fbf19380cd707f: Status 404 returned error can't find the container with id 52931bca33b633a8f7b4a404b3d376c51a9562b00ed924bbb1fbf19380cd707f
Feb 20 11:58:41.745868 master-0 kubenswrapper[7756]: I0220 11:58:41.745222 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 20 11:58:42.494477 master-0 kubenswrapper[7756]: I0220 11:58:42.494329 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"7de8fb9d-34f7-49bc-867d-827a0f9a11e7","Type":"ContainerStarted","Data":"56ae66462a4df6b3b10343480cd4dc180d6cf045523fb628f58018d2caac8f02"}
Feb 20 11:58:42.494477 master-0 kubenswrapper[7756]: I0220 11:58:42.494421 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"7de8fb9d-34f7-49bc-867d-827a0f9a11e7","Type":"ContainerStarted","Data":"52931bca33b633a8f7b4a404b3d376c51a9562b00ed924bbb1fbf19380cd707f"}
Feb 20 11:58:42.512849 master-0 kubenswrapper[7756]: I0220 11:58:42.512745 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.51272576 podStartE2EDuration="2.51272576s" podCreationTimestamp="2026-02-20 11:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 11:58:42.508840573 +0000 UTC m=+568.251088571" watchObservedRunningTime="2026-02-20 11:58:42.51272576 
+0000 UTC m=+568.254973768"
Feb 20 11:58:43.508342 master-0 kubenswrapper[7756]: I0220 11:58:43.508205 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" event={"ID":"aae1df07-cf9f-47a3-b146-2a0adb182660","Type":"ContainerStarted","Data":"c32210d547e8228aed7b6c19ad7b28e09fc89234196b0c301820f00ada729e4d"}
Feb 20 11:58:44.518478 master-0 kubenswrapper[7756]: I0220 11:58:44.518412 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" event={"ID":"aae1df07-cf9f-47a3-b146-2a0adb182660","Type":"ContainerStarted","Data":"061c49058451e46fbb25aac2a7afcc50532d9ff5b080e215c69d61be33e98b96"}
Feb 20 11:58:45.530983 master-0 kubenswrapper[7756]: I0220 11:58:45.530869 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" event={"ID":"aae1df07-cf9f-47a3-b146-2a0adb182660","Type":"ContainerStarted","Data":"64c899002ff3b4ae177a856aab03184a587944a22340c37394ab77468ee36a67"}
Feb 20 11:58:45.576236 master-0 kubenswrapper[7756]: I0220 11:58:45.576094 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" podStartSLOduration=2.423631716 podStartE2EDuration="6.576062408s" podCreationTimestamp="2026-02-20 11:58:39 +0000 UTC" firstStartedPulling="2026-02-20 11:58:40.14023533 +0000 UTC m=+565.882483368" lastFinishedPulling="2026-02-20 11:58:44.292666052 +0000 UTC m=+570.034914060" observedRunningTime="2026-02-20 11:58:45.571989646 +0000 UTC m=+571.314237734" watchObservedRunningTime="2026-02-20 11:58:45.576062408 +0000 UTC m=+571.318310456"
Feb 20 11:58:48.926392 master-0 kubenswrapper[7756]: I0220 11:58:48.926331 7756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 20 11:58:48.927225 master-0 kubenswrapper[7756]: I0220 11:58:48.926910 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" containerID="cri-o://424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5" gracePeriod=30
Feb 20 11:58:48.927225 master-0 kubenswrapper[7756]: I0220 11:58:48.926979 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" containerID="cri-o://2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512" gracePeriod=30
Feb 20 11:58:48.927225 master-0 kubenswrapper[7756]: I0220 11:58:48.927065 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" containerID="cri-o://5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5" gracePeriod=30
Feb 20 11:58:48.927225 master-0 kubenswrapper[7756]: I0220 11:58:48.927019 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" containerID="cri-o://ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe" gracePeriod=30
Feb 20 11:58:48.927225 master-0 kubenswrapper[7756]: I0220 11:58:48.927077 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" containerID="cri-o://4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954" gracePeriod=30
Feb 20 11:58:48.931703 master-0 kubenswrapper[7756]: I0220 11:58:48.931603 7756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 20 11:58:48.932123 master-0 kubenswrapper[7756]: E0220 11:58:48.932069 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics"
Feb 20 11:58:48.932123 master-0 kubenswrapper[7756]: I0220 11:58:48.932112 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: E0220 11:58:48.932146 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: I0220 11:58:48.932162 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: E0220 11:58:48.932183 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: I0220 11:58:48.932200 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: E0220 11:58:48.932232 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: I0220 11:58:48.932248 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: E0220 11:58:48.932267 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: I0220 11:58:48.932282 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: E0220 11:58:48.932302 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: I0220 11:58:48.932314 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: E0220 11:58:48.932334 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: I0220 11:58:48.932346 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 20 11:58:48.932355 master-0 kubenswrapper[7756]: E0220 11:58:48.932370 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 20 11:58:48.933152 master-0 kubenswrapper[7756]: I0220 11:58:48.932383 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 20 11:58:48.933152 master-0 kubenswrapper[7756]: I0220 11:58:48.932618 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 20 11:58:48.933152 master-0 kubenswrapper[7756]: I0220 11:58:48.932643 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 20 11:58:48.933152 master-0 kubenswrapper[7756]: I0220 11:58:48.932676 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl"
Feb 20 11:58:48.933152 master-0 kubenswrapper[7756]: I0220 11:58:48.932696 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics"
Feb 20 11:58:48.933152 master-0 kubenswrapper[7756]: I0220 11:58:48.932714 7756 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 20 11:58:48.989489 master-0 kubenswrapper[7756]: I0220 11:58:48.989404 7756 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body=
Feb 20 11:58:48.989690 master-0 kubenswrapper[7756]: I0220 11:58:48.989509 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused"
Feb 20 11:58:48.999197 master-0 kubenswrapper[7756]: I0220 11:58:48.999136 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:48.999314 master-0 kubenswrapper[7756]: I0220 11:58:48.999246 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:48.999314 master-0 kubenswrapper[7756]: I0220 11:58:48.999286 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:48.999454 master-0 kubenswrapper[7756]: I0220 11:58:48.999326 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:48.999454 master-0 kubenswrapper[7756]: I0220 11:58:48.999369 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:48.999454 master-0 kubenswrapper[7756]: I0220 11:58:48.999408 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.101579 master-0 kubenswrapper[7756]: I0220 11:58:49.101492 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.101742 master-0 kubenswrapper[7756]: I0220 11:58:49.101666 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.101742 master-0 kubenswrapper[7756]: I0220 11:58:49.101709 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.101871 master-0 kubenswrapper[7756]: I0220 11:58:49.101752 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.101871 master-0 kubenswrapper[7756]: I0220 11:58:49.101786 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.101871 master-0 kubenswrapper[7756]: I0220 11:58:49.101859 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.102232 master-0 kubenswrapper[7756]: I0220 11:58:49.101905 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.102232 master-0 kubenswrapper[7756]: I0220 11:58:49.102157 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.102232 master-0 kubenswrapper[7756]: I0220 11:58:49.102170 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.102232 master-0 kubenswrapper[7756]: I0220 11:58:49.101912 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.102232 master-0 kubenswrapper[7756]: I0220 11:58:49.102118 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.102520 master-0 kubenswrapper[7756]: I0220 11:58:49.102354 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 11:58:49.572206 master-0 kubenswrapper[7756]: I0220 11:58:49.572125 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 20 11:58:49.573678 master-0 kubenswrapper[7756]: I0220 11:58:49.573630 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 20 11:58:49.576424 master-0 kubenswrapper[7756]: I0220 11:58:49.576359 7756 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" 
containerID="2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512" exitCode=2
Feb 20 11:58:49.576424 master-0 kubenswrapper[7756]: I0220 11:58:49.576406 7756 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5" exitCode=0
Feb 20 11:58:49.576424 master-0 kubenswrapper[7756]: I0220 11:58:49.576422 7756 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe" exitCode=2
Feb 20 11:58:56.671617 master-0 kubenswrapper[7756]: I0220 11:58:56.671503 7756 generic.go:334] "Generic (PLEG): container finished" podID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerID="e665c0ba7cf5562cef899fea3b259e95ae91076c695d828d8b5ee4e482dac445" exitCode=0
Feb 20 11:58:56.672711 master-0 kubenswrapper[7756]: I0220 11:58:56.672651 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerDied","Data":"e665c0ba7cf5562cef899fea3b259e95ae91076c695d828d8b5ee4e482dac445"}
Feb 20 11:58:56.672907 master-0 kubenswrapper[7756]: I0220 11:58:56.672877 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerStarted","Data":"59be86e8d4a5781613fee8a9f98dc6c90430b05bfb61e001a26978b78f148625"}
Feb 20 11:58:56.673073 master-0 kubenswrapper[7756]: I0220 11:58:56.672964 7756 scope.go:117] "RemoveContainer" containerID="59934f71df55065f6ab9cbdff084344dc055464c00d5db2644ae6d5d661e4e89"
Feb 20 11:58:56.877078 master-0 kubenswrapper[7756]: I0220 11:58:56.876946 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:58:56.880437 master-0 kubenswrapper[7756]: I0220 11:58:56.880370 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:58:56.880437 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:58:56.880437 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:58:56.880437 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:58:56.880801 master-0 kubenswrapper[7756]: I0220 11:58:56.880446 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:58:57.880100 master-0 kubenswrapper[7756]: I0220 11:58:57.879984 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:58:57.880100 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:58:57.880100 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:58:57.880100 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:58:57.880100 master-0 kubenswrapper[7756]: I0220 11:58:57.880089 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:58:58.880429 master-0 kubenswrapper[7756]: I0220 11:58:58.880329 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:58:58.880429 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:58:58.880429 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:58:58.880429 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:58:58.880429 master-0 kubenswrapper[7756]: I0220 11:58:58.880416 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:58:59.877800 master-0 kubenswrapper[7756]: I0220 11:58:59.877691 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 11:58:59.880892 master-0 kubenswrapper[7756]: I0220 11:58:59.880839 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:58:59.880892 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:58:59.880892 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:58:59.880892 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:58:59.881873 master-0 kubenswrapper[7756]: I0220 11:58:59.880913 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:59:00.879756 master-0 kubenswrapper[7756]: I0220 11:59:00.879701 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:59:00.879756 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:59:00.879756 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:59:00.879756 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:59:00.880476 master-0 kubenswrapper[7756]: I0220 11:59:00.880432 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:59:01.743418 master-0 kubenswrapper[7756]: I0220 11:59:01.743286 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log"
Feb 20 11:59:01.744347 master-0 kubenswrapper[7756]: I0220 11:59:01.743414 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da" exitCode=1
Feb 20 11:59:01.744347 master-0 kubenswrapper[7756]: I0220 11:59:01.743500 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerDied","Data":"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"}
Feb 20 11:59:01.744784 master-0 kubenswrapper[7756]: I0220 11:59:01.744723 7756 scope.go:117] "RemoveContainer" containerID="ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"
Feb 20 11:59:01.880591 master-0 kubenswrapper[7756]: I0220 11:59:01.880471 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 20 11:59:01.880591 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:01.880591 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:01.880591 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:01.880943 master-0 kubenswrapper[7756]: I0220 11:59:01.880632 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:01.969158 master-0 kubenswrapper[7756]: I0220 11:59:01.969048 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:59:01.969158 master-0 kubenswrapper[7756]: I0220 11:59:01.969124 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:59:01.969158 master-0 kubenswrapper[7756]: I0220 11:59:01.969143 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:59:02.758176 master-0 kubenswrapper[7756]: I0220 11:59:02.758067 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log" Feb 20 11:59:02.759073 master-0 kubenswrapper[7756]: I0220 11:59:02.758208 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23"} Feb 20 11:59:02.880039 master-0 kubenswrapper[7756]: I0220 11:59:02.879915 7756 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:02.880039 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:02.880039 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:02.880039 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:02.880424 master-0 kubenswrapper[7756]: I0220 11:59:02.880077 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:03.768661 master-0 kubenswrapper[7756]: I0220 11:59:03.768580 7756 generic.go:334] "Generic (PLEG): container finished" podID="305f625e-16b0-4840-a9e2-25571b49ad2a" containerID="aa7475b04d1f2f206998430be0a72c2f43703844dbdb13b2c6bf74e325b14f62" exitCode=0 Feb 20 11:59:03.769411 master-0 kubenswrapper[7756]: I0220 11:59:03.768674 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"305f625e-16b0-4840-a9e2-25571b49ad2a","Type":"ContainerDied","Data":"aa7475b04d1f2f206998430be0a72c2f43703844dbdb13b2c6bf74e325b14f62"} Feb 20 11:59:03.880655 master-0 kubenswrapper[7756]: I0220 11:59:03.880569 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:03.880655 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:03.880655 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:03.880655 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:03.881075 master-0 kubenswrapper[7756]: I0220 
11:59:03.880664 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:04.770084 master-0 kubenswrapper[7756]: I0220 11:59:04.770005 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_3f22083d-dc18-4acd-aa7f-d01d407c7837/installer/0.log" Feb 20 11:59:04.770812 master-0 kubenswrapper[7756]: I0220 11:59:04.770119 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:59:04.780746 master-0 kubenswrapper[7756]: I0220 11:59:04.780696 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_3f22083d-dc18-4acd-aa7f-d01d407c7837/installer/0.log" Feb 20 11:59:04.780879 master-0 kubenswrapper[7756]: I0220 11:59:04.780768 7756 generic.go:334] "Generic (PLEG): container finished" podID="3f22083d-dc18-4acd-aa7f-d01d407c7837" containerID="c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2" exitCode=1 Feb 20 11:59:04.780949 master-0 kubenswrapper[7756]: I0220 11:59:04.780854 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"3f22083d-dc18-4acd-aa7f-d01d407c7837","Type":"ContainerDied","Data":"c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2"} Feb 20 11:59:04.780949 master-0 kubenswrapper[7756]: I0220 11:59:04.780880 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 11:59:04.781085 master-0 kubenswrapper[7756]: I0220 11:59:04.780951 7756 scope.go:117] "RemoveContainer" containerID="c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2" Feb 20 11:59:04.781143 master-0 kubenswrapper[7756]: I0220 11:59:04.780933 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"3f22083d-dc18-4acd-aa7f-d01d407c7837","Type":"ContainerDied","Data":"5d441a95da6316299324b66295c9678ac2bf954531cfa49dee0901be523a79b8"} Feb 20 11:59:04.805025 master-0 kubenswrapper[7756]: I0220 11:59:04.804956 7756 scope.go:117] "RemoveContainer" containerID="c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2" Feb 20 11:59:04.805666 master-0 kubenswrapper[7756]: E0220 11:59:04.805594 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2\": container with ID starting with c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2 not found: ID does not exist" containerID="c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2" Feb 20 11:59:04.805783 master-0 kubenswrapper[7756]: I0220 11:59:04.805658 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2"} err="failed to get container status \"c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2\": rpc error: code = NotFound desc = could not find container \"c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2\": container with ID starting with c7a969f9b5267c87dc133b4672c13f279bd5a2e8609fe160678e9d8227090dd2 not found: ID does not exist" Feb 20 11:59:04.865749 master-0 kubenswrapper[7756]: I0220 11:59:04.865654 7756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f22083d-dc18-4acd-aa7f-d01d407c7837-kube-api-access\") pod \"3f22083d-dc18-4acd-aa7f-d01d407c7837\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " Feb 20 11:59:04.866039 master-0 kubenswrapper[7756]: I0220 11:59:04.865874 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-var-lock\") pod \"3f22083d-dc18-4acd-aa7f-d01d407c7837\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " Feb 20 11:59:04.866039 master-0 kubenswrapper[7756]: I0220 11:59:04.865919 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-kubelet-dir\") pod \"3f22083d-dc18-4acd-aa7f-d01d407c7837\" (UID: \"3f22083d-dc18-4acd-aa7f-d01d407c7837\") " Feb 20 11:59:04.866177 master-0 kubenswrapper[7756]: I0220 11:59:04.866107 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f22083d-dc18-4acd-aa7f-d01d407c7837" (UID: "3f22083d-dc18-4acd-aa7f-d01d407c7837"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:59:04.866177 master-0 kubenswrapper[7756]: I0220 11:59:04.866121 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-var-lock" (OuterVolumeSpecName: "var-lock") pod "3f22083d-dc18-4acd-aa7f-d01d407c7837" (UID: "3f22083d-dc18-4acd-aa7f-d01d407c7837"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:59:04.866497 master-0 kubenswrapper[7756]: I0220 11:59:04.866451 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 11:59:04.866497 master-0 kubenswrapper[7756]: I0220 11:59:04.866475 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f22083d-dc18-4acd-aa7f-d01d407c7837-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 11:59:04.870801 master-0 kubenswrapper[7756]: I0220 11:59:04.870738 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f22083d-dc18-4acd-aa7f-d01d407c7837-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f22083d-dc18-4acd-aa7f-d01d407c7837" (UID: "3f22083d-dc18-4acd-aa7f-d01d407c7837"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 11:59:04.881186 master-0 kubenswrapper[7756]: I0220 11:59:04.881107 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:04.881186 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:04.881186 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:04.881186 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:04.881186 master-0 kubenswrapper[7756]: I0220 11:59:04.881161 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:04.968101 master-0 kubenswrapper[7756]: I0220 11:59:04.967911 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f22083d-dc18-4acd-aa7f-d01d407c7837-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 11:59:05.185762 master-0 kubenswrapper[7756]: I0220 11:59:05.185692 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 20 11:59:05.272204 master-0 kubenswrapper[7756]: I0220 11:59:05.272115 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-var-lock\") pod \"305f625e-16b0-4840-a9e2-25571b49ad2a\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " Feb 20 11:59:05.272461 master-0 kubenswrapper[7756]: I0220 11:59:05.272208 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-var-lock" (OuterVolumeSpecName: "var-lock") pod "305f625e-16b0-4840-a9e2-25571b49ad2a" (UID: "305f625e-16b0-4840-a9e2-25571b49ad2a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:59:05.272461 master-0 kubenswrapper[7756]: I0220 11:59:05.272223 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/305f625e-16b0-4840-a9e2-25571b49ad2a-kube-api-access\") pod \"305f625e-16b0-4840-a9e2-25571b49ad2a\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " Feb 20 11:59:05.272461 master-0 kubenswrapper[7756]: I0220 11:59:05.272404 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-kubelet-dir\") pod \"305f625e-16b0-4840-a9e2-25571b49ad2a\" (UID: \"305f625e-16b0-4840-a9e2-25571b49ad2a\") " Feb 20 11:59:05.272742 master-0 kubenswrapper[7756]: I0220 11:59:05.272514 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "305f625e-16b0-4840-a9e2-25571b49ad2a" (UID: "305f625e-16b0-4840-a9e2-25571b49ad2a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 11:59:05.273054 master-0 kubenswrapper[7756]: I0220 11:59:05.272996 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 11:59:05.273128 master-0 kubenswrapper[7756]: I0220 11:59:05.273052 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/305f625e-16b0-4840-a9e2-25571b49ad2a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 11:59:05.276908 master-0 kubenswrapper[7756]: I0220 11:59:05.276857 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305f625e-16b0-4840-a9e2-25571b49ad2a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "305f625e-16b0-4840-a9e2-25571b49ad2a" (UID: "305f625e-16b0-4840-a9e2-25571b49ad2a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 11:59:05.375661 master-0 kubenswrapper[7756]: I0220 11:59:05.375523 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/305f625e-16b0-4840-a9e2-25571b49ad2a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 11:59:05.809087 master-0 kubenswrapper[7756]: I0220 11:59:05.809009 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"305f625e-16b0-4840-a9e2-25571b49ad2a","Type":"ContainerDied","Data":"bf890effc236ee0e21e9e57ddce2324a331c4793a53024dbaa2deb40164eb945"} Feb 20 11:59:05.809087 master-0 kubenswrapper[7756]: I0220 11:59:05.809081 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf890effc236ee0e21e9e57ddce2324a331c4793a53024dbaa2deb40164eb945" Feb 20 11:59:05.810023 master-0 kubenswrapper[7756]: I0220 11:59:05.809058 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 20 11:59:05.879589 master-0 kubenswrapper[7756]: I0220 11:59:05.879482 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:05.879589 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:05.879589 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:05.879589 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:05.879945 master-0 kubenswrapper[7756]: I0220 11:59:05.879597 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:06.171317 master-0 kubenswrapper[7756]: E0220 11:59:06.170563 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:59:06.880200 master-0 kubenswrapper[7756]: I0220 11:59:06.880104 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:06.880200 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:06.880200 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:06.880200 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:06.881396 master-0 kubenswrapper[7756]: I0220 11:59:06.880211 7756 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:07.379810 master-0 kubenswrapper[7756]: E0220 11:59:07.379698 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:58:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:58:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:58:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:58:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:59:07.880353 master-0 kubenswrapper[7756]: I0220 11:59:07.880256 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:07.880353 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:07.880353 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 
11:59:07.880353 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:07.881385 master-0 kubenswrapper[7756]: I0220 11:59:07.880361 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:08.881294 master-0 kubenswrapper[7756]: I0220 11:59:08.881212 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:08.881294 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:08.881294 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:08.881294 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:08.881927 master-0 kubenswrapper[7756]: I0220 11:59:08.881314 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:09.881131 master-0 kubenswrapper[7756]: I0220 11:59:09.881034 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:09.881131 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:09.881131 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:09.881131 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:09.882177 master-0 kubenswrapper[7756]: I0220 11:59:09.881131 7756 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:10.880831 master-0 kubenswrapper[7756]: I0220 11:59:10.880714 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:10.880831 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:10.880831 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:10.880831 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:10.880831 master-0 kubenswrapper[7756]: I0220 11:59:10.880797 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:11.881034 master-0 kubenswrapper[7756]: I0220 11:59:11.880908 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:11.881034 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:11.881034 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:11.881034 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:11.881034 master-0 kubenswrapper[7756]: I0220 11:59:11.881002 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:11.968738 
master-0 kubenswrapper[7756]: I0220 11:59:11.968642 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:59:11.968738 master-0 kubenswrapper[7756]: I0220 11:59:11.968726 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:59:11.969217 master-0 kubenswrapper[7756]: I0220 11:59:11.968908 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 20 11:59:11.969217 master-0 kubenswrapper[7756]: I0220 11:59:11.968992 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 20 11:59:12.868255 master-0 kubenswrapper[7756]: I0220 11:59:12.868055 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/3.log" Feb 20 11:59:12.869242 master-0 kubenswrapper[7756]: I0220 11:59:12.869172 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/2.log" Feb 20 11:59:12.869886 master-0 kubenswrapper[7756]: I0220 11:59:12.869817 7756 generic.go:334] "Generic (PLEG): container finished" podID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" containerID="095bad8ef44d69b4cc26fcf2dd343a67938137ce3213cb7022a98a05d1eb31af" 
exitCode=1 Feb 20 11:59:12.870056 master-0 kubenswrapper[7756]: I0220 11:59:12.869877 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerDied","Data":"095bad8ef44d69b4cc26fcf2dd343a67938137ce3213cb7022a98a05d1eb31af"} Feb 20 11:59:12.870056 master-0 kubenswrapper[7756]: I0220 11:59:12.869956 7756 scope.go:117] "RemoveContainer" containerID="1c5678620badef46cf4ec23ce00f114b50bbc4d668fc0a3de390930731198bcb" Feb 20 11:59:12.870786 master-0 kubenswrapper[7756]: I0220 11:59:12.870721 7756 scope.go:117] "RemoveContainer" containerID="095bad8ef44d69b4cc26fcf2dd343a67938137ce3213cb7022a98a05d1eb31af" Feb 20 11:59:12.871191 master-0 kubenswrapper[7756]: E0220 11:59:12.871126 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" Feb 20 11:59:12.880683 master-0 kubenswrapper[7756]: I0220 11:59:12.880614 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:12.880683 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:12.880683 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:12.880683 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:12.880997 master-0 kubenswrapper[7756]: I0220 11:59:12.880737 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" 
podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:13.879873 master-0 kubenswrapper[7756]: I0220 11:59:13.879761 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:13.879873 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:13.879873 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:13.879873 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:13.879873 master-0 kubenswrapper[7756]: I0220 11:59:13.879848 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:13.880904 master-0 kubenswrapper[7756]: I0220 11:59:13.879910 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/3.log" Feb 20 11:59:14.882372 master-0 kubenswrapper[7756]: I0220 11:59:14.882271 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:14.882372 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:14.882372 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:14.882372 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:14.883721 master-0 kubenswrapper[7756]: I0220 11:59:14.883636 7756 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:59:15.880395 master-0 kubenswrapper[7756]: I0220 11:59:15.880217 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:59:15.880395 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:59:15.880395 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:59:15.880395 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:59:15.880395 master-0 kubenswrapper[7756]: I0220 11:59:15.880319 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:59:16.171607 master-0 kubenswrapper[7756]: E0220 11:59:16.171382 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:59:16.880778 master-0 kubenswrapper[7756]: I0220 11:59:16.880645 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:59:16.880778 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:59:16.880778 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:59:16.880778 master-0 kubenswrapper[7756]: healthz check failed
Feb 20
11:59:16.880778 master-0 kubenswrapper[7756]: I0220 11:59:16.880744 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:59:17.381023 master-0 kubenswrapper[7756]: E0220 11:59:17.380779 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 11:59:17.881214 master-0 kubenswrapper[7756]: I0220 11:59:17.881143 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:59:17.881214 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:59:17.881214 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:59:17.881214 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:59:17.881887 master-0 kubenswrapper[7756]: I0220 11:59:17.881807 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:59:18.880318 master-0 kubenswrapper[7756]: I0220 11:59:18.880173 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:59:18.880318 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:59:18.880318
master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:59:18.880318 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:59:18.881368 master-0 kubenswrapper[7756]: I0220 11:59:18.880332 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:59:19.552976 master-0 kubenswrapper[7756]: I0220 11:59:19.552885 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 20 11:59:19.554382 master-0 kubenswrapper[7756]: I0220 11:59:19.554335 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 20 11:59:19.555519 master-0 kubenswrapper[7756]: I0220 11:59:19.555485 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log"
Feb 20 11:59:19.557462 master-0 kubenswrapper[7756]: I0220 11:59:19.557434 7756 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 20 11:59:19.703985 master-0 kubenswrapper[7756]: I0220 11:59:19.703929 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 20 11:59:19.704290 master-0 kubenswrapper[7756]: I0220 11:59:19.704031 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 20 11:59:19.704290 master-0 kubenswrapper[7756]: I0220 11:59:19.704126 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 20 11:59:19.704290 master-0 kubenswrapper[7756]: I0220 11:59:19.704120 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "cert-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:59:19.704290 master-0 kubenswrapper[7756]: I0220 11:59:19.704162 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 20 11:59:19.704290 master-0 kubenswrapper[7756]: I0220 11:59:19.704202 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir" (OuterVolumeSpecName: "data-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:59:19.704290 master-0 kubenswrapper[7756]: I0220 11:59:19.704220 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 20 11:59:19.704290 master-0 kubenswrapper[7756]: I0220 11:59:19.704239 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "static-pod-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:59:19.704290 master-0 kubenswrapper[7756]: I0220 11:59:19.704285 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 20 11:59:19.705057 master-0 kubenswrapper[7756]: I0220 11:59:19.704275 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:59:19.705057 master-0 kubenswrapper[7756]: I0220 11:59:19.704368 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir" (OuterVolumeSpecName: "log-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:59:19.705057 master-0 kubenswrapper[7756]: I0220 11:59:19.704472 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "usr-local-bin".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 11:59:19.705314 master-0 kubenswrapper[7756]: I0220 11:59:19.705091 7756 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:59:19.705314 master-0 kubenswrapper[7756]: I0220 11:59:19.705123 7756 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:59:19.705314 master-0 kubenswrapper[7756]: I0220 11:59:19.705144 7756 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:59:19.705314 master-0 kubenswrapper[7756]: I0220 11:59:19.705162 7756 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:59:19.705314 master-0 kubenswrapper[7756]: I0220 11:59:19.705180 7756 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Feb 20 11:59:19.705314 master-0 kubenswrapper[7756]: I0220 11:59:19.705198 7756 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 11:59:19.880750 master-0 kubenswrapper[7756]: I0220 11:59:19.880649 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http
failed: reason withheld
Feb 20 11:59:19.880750 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:59:19.880750 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:59:19.880750 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:59:19.880750 master-0 kubenswrapper[7756]: I0220 11:59:19.880737 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:59:19.940070 master-0 kubenswrapper[7756]: I0220 11:59:19.939949 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 20 11:59:19.941736 master-0 kubenswrapper[7756]: I0220 11:59:19.941676 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 20 11:59:19.943246 master-0 kubenswrapper[7756]: I0220 11:59:19.943183 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log"
Feb 20 11:59:19.944708 master-0 kubenswrapper[7756]: I0220 11:59:19.944642 7756 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954" exitCode=0
Feb 20 11:59:19.944708 master-0 kubenswrapper[7756]: I0220 11:59:19.944690 7756 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5" exitCode=137
Feb 20 11:59:19.945050 master-0 kubenswrapper[7756]: I0220 11:59:19.944768 7756 scope.go:117] "RemoveContainer" containerID="2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512"
Feb 20 11:59:19.945050 master-0
kubenswrapper[7756]: I0220 11:59:19.944785 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 20 11:59:19.975436 master-0 kubenswrapper[7756]: I0220 11:59:19.975066 7756 scope.go:117] "RemoveContainer" containerID="5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5"
Feb 20 11:59:19.999066 master-0 kubenswrapper[7756]: I0220 11:59:19.998995 7756 scope.go:117] "RemoveContainer" containerID="ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe"
Feb 20 11:59:20.017941 master-0 kubenswrapper[7756]: I0220 11:59:20.017877 7756 scope.go:117] "RemoveContainer" containerID="4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954"
Feb 20 11:59:20.042340 master-0 kubenswrapper[7756]: I0220 11:59:20.042276 7756 scope.go:117] "RemoveContainer" containerID="424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5"
Feb 20 11:59:20.066071 master-0 kubenswrapper[7756]: I0220 11:59:20.066009 7756 scope.go:117] "RemoveContainer" containerID="ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27"
Feb 20 11:59:20.102304 master-0 kubenswrapper[7756]: I0220 11:59:20.102265 7756 scope.go:117] "RemoveContainer" containerID="97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0"
Feb 20 11:59:20.136715 master-0 kubenswrapper[7756]: I0220 11:59:20.136516 7756 scope.go:117] "RemoveContainer" containerID="cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872"
Feb 20 11:59:20.162159 master-0 kubenswrapper[7756]: I0220 11:59:20.161954 7756 scope.go:117] "RemoveContainer" containerID="2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512"
Feb 20 11:59:20.165103 master-0 kubenswrapper[7756]: E0220 11:59:20.164693 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512\": container with ID starting with
2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512 not found: ID does not exist" containerID="2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512"
Feb 20 11:59:20.165103 master-0 kubenswrapper[7756]: I0220 11:59:20.164724 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512"} err="failed to get container status \"2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512\": rpc error: code = NotFound desc = could not find container \"2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512\": container with ID starting with 2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512 not found: ID does not exist"
Feb 20 11:59:20.165103 master-0 kubenswrapper[7756]: I0220 11:59:20.164746 7756 scope.go:117] "RemoveContainer" containerID="5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5"
Feb 20 11:59:20.165509 master-0 kubenswrapper[7756]: E0220 11:59:20.165461 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5\": container with ID starting with 5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5 not found: ID does not exist" containerID="5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5"
Feb 20 11:59:20.165509 master-0 kubenswrapper[7756]: I0220 11:59:20.165488 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5"} err="failed to get container status \"5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5\": rpc error: code = NotFound desc = could not find container \"5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5\": container with ID starting with
5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5 not found: ID does not exist"
Feb 20 11:59:20.165509 master-0 kubenswrapper[7756]: I0220 11:59:20.165503 7756 scope.go:117] "RemoveContainer" containerID="ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe"
Feb 20 11:59:20.166078 master-0 kubenswrapper[7756]: E0220 11:59:20.166032 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe\": container with ID starting with ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe not found: ID does not exist" containerID="ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe"
Feb 20 11:59:20.166078 master-0 kubenswrapper[7756]: I0220 11:59:20.166060 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe"} err="failed to get container status \"ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe\": rpc error: code = NotFound desc = could not find container \"ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe\": container with ID starting with ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe not found: ID does not exist"
Feb 20 11:59:20.166078 master-0 kubenswrapper[7756]: I0220 11:59:20.166077 7756 scope.go:117] "RemoveContainer" containerID="4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954"
Feb 20 11:59:20.166935 master-0 kubenswrapper[7756]: E0220 11:59:20.166876 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954\": container with ID starting with 4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954 not found: ID does not exist"
containerID="4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954"
Feb 20 11:59:20.167050 master-0 kubenswrapper[7756]: I0220 11:59:20.166931 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954"} err="failed to get container status \"4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954\": rpc error: code = NotFound desc = could not find container \"4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954\": container with ID starting with 4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954 not found: ID does not exist"
Feb 20 11:59:20.167050 master-0 kubenswrapper[7756]: I0220 11:59:20.166959 7756 scope.go:117] "RemoveContainer" containerID="424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5"
Feb 20 11:59:20.167372 master-0 kubenswrapper[7756]: E0220 11:59:20.167327 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5\": container with ID starting with 424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5 not found: ID does not exist" containerID="424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5"
Feb 20 11:59:20.167372 master-0 kubenswrapper[7756]: I0220 11:59:20.167360 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5"} err="failed to get container status \"424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5\": rpc error: code = NotFound desc = could not find container \"424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5\": container with ID starting with 424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5 not found: ID does not exist"
Feb 20 11:59:20.167509 master-0
kubenswrapper[7756]: I0220 11:59:20.167378 7756 scope.go:117] "RemoveContainer" containerID="ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27"
Feb 20 11:59:20.167807 master-0 kubenswrapper[7756]: E0220 11:59:20.167744 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27\": container with ID starting with ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27 not found: ID does not exist" containerID="ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27"
Feb 20 11:59:20.167892 master-0 kubenswrapper[7756]: I0220 11:59:20.167805 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27"} err="failed to get container status \"ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27\": rpc error: code = NotFound desc = could not find container \"ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27\": container with ID starting with ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27 not found: ID does not exist"
Feb 20 11:59:20.167892 master-0 kubenswrapper[7756]: I0220 11:59:20.167843 7756 scope.go:117] "RemoveContainer" containerID="97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0"
Feb 20 11:59:20.168223 master-0 kubenswrapper[7756]: E0220 11:59:20.168179 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0\": container with ID starting with 97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0 not found: ID does not exist" containerID="97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0"
Feb 20 11:59:20.168223 master-0 kubenswrapper[7756]: I0220 11:59:20.168203 7756
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0"} err="failed to get container status \"97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0\": rpc error: code = NotFound desc = could not find container \"97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0\": container with ID starting with 97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0 not found: ID does not exist"
Feb 20 11:59:20.168223 master-0 kubenswrapper[7756]: I0220 11:59:20.168217 7756 scope.go:117] "RemoveContainer" containerID="cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872"
Feb 20 11:59:20.168467 master-0 kubenswrapper[7756]: E0220 11:59:20.168441 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872\": container with ID starting with cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872 not found: ID does not exist" containerID="cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872"
Feb 20 11:59:20.168467 master-0 kubenswrapper[7756]: I0220 11:59:20.168461 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872"} err="failed to get container status \"cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872\": rpc error: code = NotFound desc = could not find container \"cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872\": container with ID starting with cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872 not found: ID does not exist"
Feb 20 11:59:20.168633 master-0 kubenswrapper[7756]: I0220 11:59:20.168475 7756 scope.go:117] "RemoveContainer" containerID="2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512"
Feb 20
11:59:20.168921 master-0 kubenswrapper[7756]: I0220 11:59:20.168858 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512"} err="failed to get container status \"2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512\": rpc error: code = NotFound desc = could not find container \"2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512\": container with ID starting with 2ba79b8a61ad1e014dab9a41ae6ffca563e069adc1e857a9b7296444c95d8512 not found: ID does not exist"
Feb 20 11:59:20.168921 master-0 kubenswrapper[7756]: I0220 11:59:20.168909 7756 scope.go:117] "RemoveContainer" containerID="5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5"
Feb 20 11:59:20.169338 master-0 kubenswrapper[7756]: I0220 11:59:20.169283 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5"} err="failed to get container status \"5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5\": rpc error: code = NotFound desc = could not find container \"5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5\": container with ID starting with 5257a24b6a69f35461df87a0817b6576530a572a647d20b2e3a19e37f47dcaa5 not found: ID does not exist"
Feb 20 11:59:20.169338 master-0 kubenswrapper[7756]: I0220 11:59:20.169326 7756 scope.go:117] "RemoveContainer" containerID="ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe"
Feb 20 11:59:20.169761 master-0 kubenswrapper[7756]: I0220 11:59:20.169675 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe"} err="failed to get container status \"ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe\": rpc error: code = NotFound desc = could not find container
\"ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe\": container with ID starting with ce94720cc445ccff2ee1397043ad292d69f0879debd3afb2f0e39a0f65c52bbe not found: ID does not exist"
Feb 20 11:59:20.169761 master-0 kubenswrapper[7756]: I0220 11:59:20.169755 7756 scope.go:117] "RemoveContainer" containerID="4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954"
Feb 20 11:59:20.170196 master-0 kubenswrapper[7756]: I0220 11:59:20.170125 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954"} err="failed to get container status \"4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954\": rpc error: code = NotFound desc = could not find container \"4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954\": container with ID starting with 4df463d5d890f14d4f3c05caace2f021658f60fa6b13e17335c44c17f8add954 not found: ID does not exist"
Feb 20 11:59:20.170196 master-0 kubenswrapper[7756]: I0220 11:59:20.170182 7756 scope.go:117] "RemoveContainer" containerID="424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5"
Feb 20 11:59:20.170601 master-0 kubenswrapper[7756]: I0220 11:59:20.170550 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5"} err="failed to get container status \"424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5\": rpc error: code = NotFound desc = could not find container \"424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5\": container with ID starting with 424a1346766897159cc0fc256ead9f392f819750618494c245b97d86cf522bb5 not found: ID does not exist"
Feb 20 11:59:20.170601 master-0 kubenswrapper[7756]: I0220 11:59:20.170588 7756 scope.go:117] "RemoveContainer" containerID="ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27"
Feb 20
11:59:20.170855 master-0 kubenswrapper[7756]: I0220 11:59:20.170815 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27"} err="failed to get container status \"ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27\": rpc error: code = NotFound desc = could not find container \"ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27\": container with ID starting with ec73658c66839d121bf4942c2b2e906eace68e6da22da9e3367147ea6212bd27 not found: ID does not exist"
Feb 20 11:59:20.170855 master-0 kubenswrapper[7756]: I0220 11:59:20.170838 7756 scope.go:117] "RemoveContainer" containerID="97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0"
Feb 20 11:59:20.171141 master-0 kubenswrapper[7756]: I0220 11:59:20.171090 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0"} err="failed to get container status \"97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0\": rpc error: code = NotFound desc = could not find container \"97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0\": container with ID starting with 97a6b8441c81147839bc75f3684697abcb8f0ede6b55ccf394d0a12e8a1d1aa0 not found: ID does not exist"
Feb 20 11:59:20.171141 master-0 kubenswrapper[7756]: I0220 11:59:20.171128 7756 scope.go:117] "RemoveContainer" containerID="cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872"
Feb 20 11:59:20.171364 master-0 kubenswrapper[7756]: I0220 11:59:20.171336 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872"} err="failed to get container status \"cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872\": rpc error: code = NotFound desc = could not find container
\"cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872\": container with ID starting with cbd810e236f920cd9507382e01ac44153e239020595b4417ee07a294e0984872 not found: ID does not exist"
Feb 20 11:59:20.589919 master-0 kubenswrapper[7756]: I0220 11:59:20.589808 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a83278819db2092fa26d8274eb3f00" path="/var/lib/kubelet/pods/18a83278819db2092fa26d8274eb3f00/volumes"
Feb 20 11:59:20.880728 master-0 kubenswrapper[7756]: I0220 11:59:20.880554 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:59:20.880728 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:59:20.880728 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:59:20.880728 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:59:20.880728 master-0 kubenswrapper[7756]: I0220 11:59:20.880692 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 11:59:21.881020 master-0 kubenswrapper[7756]: I0220 11:59:21.880941 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 11:59:21.881020 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 11:59:21.881020 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 11:59:21.881020 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 11:59:21.881982 master-0 kubenswrapper[7756]: I0220 11:59:21.881029 7756
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:21.979362 master-0 kubenswrapper[7756]: I0220 11:59:21.979313 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:59:21.986047 master-0 kubenswrapper[7756]: I0220 11:59:21.986016 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 11:59:22.880818 master-0 kubenswrapper[7756]: I0220 11:59:22.880733 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:22.880818 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:22.880818 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:22.880818 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:22.880818 master-0 kubenswrapper[7756]: I0220 11:59:22.880810 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:22.959025 master-0 kubenswrapper[7756]: E0220 11:59:22.958871 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.1895f2900bc6986b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:58:48.926935147 +0000 UTC m=+574.669183195,LastTimestamp:2026-02-20 11:58:48.926935147 +0000 UTC m=+574.669183195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:59:23.880983 master-0 kubenswrapper[7756]: I0220 11:59:23.880888 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:23.880983 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:23.880983 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:23.880983 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:23.881947 master-0 kubenswrapper[7756]: I0220 11:59:23.880980 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:24.880188 master-0 kubenswrapper[7756]: I0220 11:59:24.880101 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:24.880188 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:24.880188 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:24.880188 
master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:24.880746 master-0 kubenswrapper[7756]: I0220 11:59:24.880187 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:25.879632 master-0 kubenswrapper[7756]: I0220 11:59:25.879562 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:25.879632 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:25.879632 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:25.879632 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:25.880566 master-0 kubenswrapper[7756]: I0220 11:59:25.879643 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:26.172491 master-0 kubenswrapper[7756]: E0220 11:59:26.172088 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:59:26.880757 master-0 kubenswrapper[7756]: I0220 11:59:26.880667 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:26.880757 master-0 kubenswrapper[7756]: [-]has-synced 
failed: reason withheld Feb 20 11:59:26.880757 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:26.880757 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:26.881792 master-0 kubenswrapper[7756]: I0220 11:59:26.880784 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:27.381577 master-0 kubenswrapper[7756]: E0220 11:59:27.381448 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:59:27.579040 master-0 kubenswrapper[7756]: I0220 11:59:27.578955 7756 scope.go:117] "RemoveContainer" containerID="095bad8ef44d69b4cc26fcf2dd343a67938137ce3213cb7022a98a05d1eb31af" Feb 20 11:59:27.579446 master-0 kubenswrapper[7756]: E0220 11:59:27.579388 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" Feb 20 11:59:27.880066 master-0 kubenswrapper[7756]: I0220 11:59:27.879978 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:27.880066 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:27.880066 master-0 kubenswrapper[7756]: 
[+]process-running ok Feb 20 11:59:27.880066 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:27.880556 master-0 kubenswrapper[7756]: I0220 11:59:27.880085 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:28.578113 master-0 kubenswrapper[7756]: I0220 11:59:28.578015 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 11:59:28.604349 master-0 kubenswrapper[7756]: I0220 11:59:28.604267 7756 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c" Feb 20 11:59:28.604349 master-0 kubenswrapper[7756]: I0220 11:59:28.604326 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c" Feb 20 11:59:28.881018 master-0 kubenswrapper[7756]: I0220 11:59:28.880779 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:28.881018 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:28.881018 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:28.881018 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:28.881018 master-0 kubenswrapper[7756]: I0220 11:59:28.880952 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:29.880741 master-0 kubenswrapper[7756]: I0220 11:59:29.880650 7756 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:29.880741 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:29.880741 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:29.880741 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:29.881768 master-0 kubenswrapper[7756]: I0220 11:59:29.880742 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:30.880704 master-0 kubenswrapper[7756]: I0220 11:59:30.880603 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:30.880704 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:30.880704 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:30.880704 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:30.880704 master-0 kubenswrapper[7756]: I0220 11:59:30.880699 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:31.880414 master-0 kubenswrapper[7756]: I0220 11:59:31.880291 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:31.880414 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:31.880414 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:31.880414 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:31.880879 master-0 kubenswrapper[7756]: I0220 11:59:31.880427 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:32.881460 master-0 kubenswrapper[7756]: I0220 11:59:32.881360 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:32.881460 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:32.881460 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:32.881460 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:32.882658 master-0 kubenswrapper[7756]: I0220 11:59:32.881465 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:33.879453 master-0 kubenswrapper[7756]: I0220 11:59:33.879372 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:33.879453 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:33.879453 master-0 kubenswrapper[7756]: [+]process-running ok 
Feb 20 11:59:33.879453 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:33.879920 master-0 kubenswrapper[7756]: I0220 11:59:33.879464 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:34.880074 master-0 kubenswrapper[7756]: I0220 11:59:34.879990 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:34.880074 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:34.880074 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:34.880074 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:34.881208 master-0 kubenswrapper[7756]: I0220 11:59:34.880108 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:35.880167 master-0 kubenswrapper[7756]: I0220 11:59:35.880085 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:35.880167 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:35.880167 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:35.880167 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:35.881139 master-0 kubenswrapper[7756]: I0220 11:59:35.880180 7756 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:36.172603 master-0 kubenswrapper[7756]: E0220 11:59:36.172368 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 20 11:59:36.880464 master-0 kubenswrapper[7756]: I0220 11:59:36.880356 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:36.880464 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:36.880464 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:36.880464 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:36.881388 master-0 kubenswrapper[7756]: I0220 11:59:36.880512 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:37.381828 master-0 kubenswrapper[7756]: E0220 11:59:37.381712 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Feb 20 11:59:37.880448 master-0 kubenswrapper[7756]: I0220 11:59:37.880355 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:37.880448 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:37.880448 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:37.880448 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:37.881445 master-0 kubenswrapper[7756]: I0220 11:59:37.880456 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:38.088253 master-0 kubenswrapper[7756]: I0220 11:59:38.088154 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-psm4s_836a6d7e-9b26-425f-ae21-00422515d7fe/approver/1.log" Feb 20 11:59:38.088978 master-0 kubenswrapper[7756]: I0220 11:59:38.088917 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-psm4s_836a6d7e-9b26-425f-ae21-00422515d7fe/approver/0.log" Feb 20 11:59:38.089447 master-0 kubenswrapper[7756]: I0220 11:59:38.089390 7756 generic.go:334] "Generic (PLEG): container finished" podID="836a6d7e-9b26-425f-ae21-00422515d7fe" containerID="8ee62624db1bf28c038634c2f6ef81ccfdeef3084369265ba22b099552cdd3a8" exitCode=1 Feb 20 11:59:38.089447 master-0 kubenswrapper[7756]: I0220 11:59:38.089417 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-psm4s" event={"ID":"836a6d7e-9b26-425f-ae21-00422515d7fe","Type":"ContainerDied","Data":"8ee62624db1bf28c038634c2f6ef81ccfdeef3084369265ba22b099552cdd3a8"} Feb 20 11:59:38.089705 master-0 kubenswrapper[7756]: I0220 11:59:38.089469 7756 scope.go:117] "RemoveContainer" containerID="ace904c5f4a3faa1035b1dcf89c693ce9b93dceae341e4edfb98ee1576eea9b6" Feb 20 11:59:38.089816 master-0 kubenswrapper[7756]: I0220 
11:59:38.089795 7756 scope.go:117] "RemoveContainer" containerID="8ee62624db1bf28c038634c2f6ef81ccfdeef3084369265ba22b099552cdd3a8" Feb 20 11:59:38.090008 master-0 kubenswrapper[7756]: E0220 11:59:38.089954 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-psm4s_openshift-network-node-identity(836a6d7e-9b26-425f-ae21-00422515d7fe)\"" pod="openshift-network-node-identity/network-node-identity-psm4s" podUID="836a6d7e-9b26-425f-ae21-00422515d7fe" Feb 20 11:59:38.880689 master-0 kubenswrapper[7756]: I0220 11:59:38.880620 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:38.880689 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:38.880689 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:38.880689 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:38.881756 master-0 kubenswrapper[7756]: I0220 11:59:38.880725 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:39.100788 master-0 kubenswrapper[7756]: I0220 11:59:39.100736 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-psm4s_836a6d7e-9b26-425f-ae21-00422515d7fe/approver/1.log" Feb 20 11:59:39.880821 master-0 kubenswrapper[7756]: I0220 11:59:39.880718 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:39.880821 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:39.880821 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:39.880821 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:39.880821 master-0 kubenswrapper[7756]: I0220 11:59:39.880818 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:40.579382 master-0 kubenswrapper[7756]: I0220 11:59:40.579221 7756 scope.go:117] "RemoveContainer" containerID="095bad8ef44d69b4cc26fcf2dd343a67938137ce3213cb7022a98a05d1eb31af" Feb 20 11:59:40.580057 master-0 kubenswrapper[7756]: E0220 11:59:40.579805 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" Feb 20 11:59:40.917790 master-0 kubenswrapper[7756]: I0220 11:59:40.917643 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:40.917790 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:40.917790 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:40.917790 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:40.917790 master-0 kubenswrapper[7756]: I0220 11:59:40.917722 7756 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:41.880050 master-0 kubenswrapper[7756]: I0220 11:59:41.879952 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:41.880050 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:41.880050 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:41.880050 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:41.880460 master-0 kubenswrapper[7756]: I0220 11:59:41.880061 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:42.880834 master-0 kubenswrapper[7756]: I0220 11:59:42.880729 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:42.880834 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:42.880834 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:42.880834 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:42.881841 master-0 kubenswrapper[7756]: I0220 11:59:42.880852 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
20 11:59:43.879966 master-0 kubenswrapper[7756]: I0220 11:59:43.879833 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:43.879966 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:43.879966 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:43.879966 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:43.879966 master-0 kubenswrapper[7756]: I0220 11:59:43.879941 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:44.880659 master-0 kubenswrapper[7756]: I0220 11:59:44.880572 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:44.880659 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:44.880659 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:44.880659 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:44.881722 master-0 kubenswrapper[7756]: I0220 11:59:44.880670 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:45.880371 master-0 kubenswrapper[7756]: I0220 11:59:45.880296 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:45.880371 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:45.880371 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:45.880371 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:45.881505 master-0 kubenswrapper[7756]: I0220 11:59:45.880389 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:46.173041 master-0 kubenswrapper[7756]: E0220 11:59:46.172750 7756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" Feb 20 11:59:46.173041 master-0 kubenswrapper[7756]: I0220 11:59:46.172818 7756 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 20 11:59:46.880911 master-0 kubenswrapper[7756]: I0220 11:59:46.880816 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:46.880911 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:46.880911 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:46.880911 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:46.881874 master-0 kubenswrapper[7756]: I0220 11:59:46.880937 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:47.382441 master-0 kubenswrapper[7756]: E0220 11:59:47.382325 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 11:59:47.382441 master-0 kubenswrapper[7756]: E0220 11:59:47.382387 7756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 20 11:59:47.880744 master-0 kubenswrapper[7756]: I0220 11:59:47.880636 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:47.880744 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:47.880744 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:47.880744 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:47.881837 master-0 kubenswrapper[7756]: I0220 11:59:47.880759 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:48.880099 master-0 kubenswrapper[7756]: I0220 11:59:48.880003 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:48.880099 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:48.880099 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 
11:59:48.880099 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:48.880707 master-0 kubenswrapper[7756]: I0220 11:59:48.880107 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:49.880919 master-0 kubenswrapper[7756]: I0220 11:59:49.880850 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:49.880919 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:49.880919 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:49.880919 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:49.881901 master-0 kubenswrapper[7756]: I0220 11:59:49.880928 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:50.580273 master-0 kubenswrapper[7756]: I0220 11:59:50.580156 7756 scope.go:117] "RemoveContainer" containerID="8ee62624db1bf28c038634c2f6ef81ccfdeef3084369265ba22b099552cdd3a8" Feb 20 11:59:50.880507 master-0 kubenswrapper[7756]: I0220 11:59:50.880340 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:50.880507 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:50.880507 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:50.880507 master-0 
kubenswrapper[7756]: healthz check failed Feb 20 11:59:50.880507 master-0 kubenswrapper[7756]: I0220 11:59:50.880432 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:51.196781 master-0 kubenswrapper[7756]: I0220 11:59:51.196613 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-psm4s_836a6d7e-9b26-425f-ae21-00422515d7fe/approver/1.log" Feb 20 11:59:51.197566 master-0 kubenswrapper[7756]: I0220 11:59:51.197246 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-psm4s" event={"ID":"836a6d7e-9b26-425f-ae21-00422515d7fe","Type":"ContainerStarted","Data":"12404b45a90d2125027e14f996b49fb4d59745902e3494cfebb55913e8ce1499"} Feb 20 11:59:51.880678 master-0 kubenswrapper[7756]: I0220 11:59:51.880374 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:51.880678 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:51.880678 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:51.880678 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:51.880678 master-0 kubenswrapper[7756]: I0220 11:59:51.880671 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:52.879901 master-0 kubenswrapper[7756]: I0220 11:59:52.879786 7756 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:52.879901 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:52.879901 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:52.879901 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:52.879901 master-0 kubenswrapper[7756]: I0220 11:59:52.879891 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:53.880434 master-0 kubenswrapper[7756]: I0220 11:59:53.880322 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:53.880434 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:53.880434 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:53.880434 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:53.880434 master-0 kubenswrapper[7756]: I0220 11:59:53.880424 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:54.879794 master-0 kubenswrapper[7756]: I0220 11:59:54.879695 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 
11:59:54.879794 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:54.879794 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:54.879794 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:54.879794 master-0 kubenswrapper[7756]: I0220 11:59:54.879774 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:55.579488 master-0 kubenswrapper[7756]: I0220 11:59:55.579401 7756 scope.go:117] "RemoveContainer" containerID="095bad8ef44d69b4cc26fcf2dd343a67938137ce3213cb7022a98a05d1eb31af" Feb 20 11:59:55.879740 master-0 kubenswrapper[7756]: I0220 11:59:55.879575 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:55.879740 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:55.879740 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:55.879740 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:55.879740 master-0 kubenswrapper[7756]: I0220 11:59:55.879638 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:56.173594 master-0 kubenswrapper[7756]: E0220 11:59:56.173366 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
interval="200ms" Feb 20 11:59:56.240730 master-0 kubenswrapper[7756]: I0220 11:59:56.240650 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/3.log" Feb 20 11:59:56.241609 master-0 kubenswrapper[7756]: I0220 11:59:56.241494 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerStarted","Data":"0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb"} Feb 20 11:59:56.674862 master-0 kubenswrapper[7756]: I0220 11:59:56.674773 7756 status_manager.go:851] "Failed to get status for pod" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods router-default-7b65dc9fcb-fkkd5)" Feb 20 11:59:56.879845 master-0 kubenswrapper[7756]: I0220 11:59:56.879756 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:56.879845 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:56.879845 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:56.879845 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:56.880279 master-0 kubenswrapper[7756]: I0220 11:59:56.879853 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:56.962784 master-0 kubenswrapper[7756]: E0220 11:59:56.962421 7756 
event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{router-default-7b65dc9fcb-fkkd5.1895f26ad0a4aab0 openshift-ingress 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7b65dc9fcb-fkkd5,UID:9c078827-3bdb-4509-aeb3-eb558df1f6e7,APIVersion:v1,ResourceVersion:10391,FieldPath:spec.containers{router},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:56:09.021065904 +0000 UTC m=+414.763313942,LastTimestamp:2026-02-20 11:58:56.001604399 +0000 UTC m=+581.743852437,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 11:59:57.880549 master-0 kubenswrapper[7756]: I0220 11:59:57.880455 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:57.880549 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:57.880549 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:57.880549 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:57.881781 master-0 kubenswrapper[7756]: I0220 11:59:57.880566 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:58.880278 master-0 kubenswrapper[7756]: I0220 
11:59:58.880185 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:58.880278 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:58.880278 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:58.880278 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:58.881244 master-0 kubenswrapper[7756]: I0220 11:59:58.880294 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 11:59:59.880475 master-0 kubenswrapper[7756]: I0220 11:59:59.880389 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 11:59:59.880475 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 11:59:59.880475 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 11:59:59.880475 master-0 kubenswrapper[7756]: healthz check failed Feb 20 11:59:59.881453 master-0 kubenswrapper[7756]: I0220 11:59:59.880483 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:00.880596 master-0 kubenswrapper[7756]: I0220 12:00:00.880479 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:00.880596 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:00.880596 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:00.880596 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:00.882150 master-0 kubenswrapper[7756]: I0220 12:00:00.880608 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:01.284362 master-0 kubenswrapper[7756]: I0220 12:00:01.284273 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_7de8fb9d-34f7-49bc-867d-827a0f9a11e7/installer/0.log" Feb 20 12:00:01.284670 master-0 kubenswrapper[7756]: I0220 12:00:01.284373 7756 generic.go:334] "Generic (PLEG): container finished" podID="7de8fb9d-34f7-49bc-867d-827a0f9a11e7" containerID="56ae66462a4df6b3b10343480cd4dc180d6cf045523fb628f58018d2caac8f02" exitCode=1 Feb 20 12:00:01.284670 master-0 kubenswrapper[7756]: I0220 12:00:01.284431 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"7de8fb9d-34f7-49bc-867d-827a0f9a11e7","Type":"ContainerDied","Data":"56ae66462a4df6b3b10343480cd4dc180d6cf045523fb628f58018d2caac8f02"} Feb 20 12:00:01.880240 master-0 kubenswrapper[7756]: I0220 12:00:01.880131 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:01.880240 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:01.880240 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:01.880240 master-0 
kubenswrapper[7756]: healthz check failed Feb 20 12:00:01.880240 master-0 kubenswrapper[7756]: I0220 12:00:01.880209 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:02.607326 master-0 kubenswrapper[7756]: E0220 12:00:02.607178 7756 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 20 12:00:02.607955 master-0 kubenswrapper[7756]: I0220 12:00:02.607780 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 12:00:02.718591 master-0 kubenswrapper[7756]: I0220 12:00:02.718457 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_7de8fb9d-34f7-49bc-867d-827a0f9a11e7/installer/0.log" Feb 20 12:00:02.718591 master-0 kubenswrapper[7756]: I0220 12:00:02.718539 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 12:00:02.879288 master-0 kubenswrapper[7756]: I0220 12:00:02.879178 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:02.879288 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:02.879288 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:02.879288 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:02.879288 master-0 kubenswrapper[7756]: I0220 12:00:02.879251 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:02.880210 master-0 kubenswrapper[7756]: I0220 12:00:02.880172 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kubelet-dir\") pod \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " Feb 20 12:00:02.880419 master-0 kubenswrapper[7756]: I0220 12:00:02.880391 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-var-lock\") pod \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " Feb 20 12:00:02.880671 master-0 kubenswrapper[7756]: I0220 12:00:02.880309 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7de8fb9d-34f7-49bc-867d-827a0f9a11e7" (UID: 
"7de8fb9d-34f7-49bc-867d-827a0f9a11e7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:00:02.880776 master-0 kubenswrapper[7756]: I0220 12:00:02.880445 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-var-lock" (OuterVolumeSpecName: "var-lock") pod "7de8fb9d-34f7-49bc-867d-827a0f9a11e7" (UID: "7de8fb9d-34f7-49bc-867d-827a0f9a11e7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:00:02.881006 master-0 kubenswrapper[7756]: I0220 12:00:02.880647 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kube-api-access\") pod \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\" (UID: \"7de8fb9d-34f7-49bc-867d-827a0f9a11e7\") " Feb 20 12:00:02.881697 master-0 kubenswrapper[7756]: I0220 12:00:02.881656 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 12:00:02.881923 master-0 kubenswrapper[7756]: I0220 12:00:02.881889 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 12:00:02.885383 master-0 kubenswrapper[7756]: I0220 12:00:02.885322 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7de8fb9d-34f7-49bc-867d-827a0f9a11e7" (UID: "7de8fb9d-34f7-49bc-867d-827a0f9a11e7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:00:02.984023 master-0 kubenswrapper[7756]: I0220 12:00:02.983943 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7de8fb9d-34f7-49bc-867d-827a0f9a11e7-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 12:00:03.302641 master-0 kubenswrapper[7756]: I0220 12:00:03.302569 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_7de8fb9d-34f7-49bc-867d-827a0f9a11e7/installer/0.log" Feb 20 12:00:03.303059 master-0 kubenswrapper[7756]: I0220 12:00:03.302783 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 12:00:03.303059 master-0 kubenswrapper[7756]: I0220 12:00:03.302777 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"7de8fb9d-34f7-49bc-867d-827a0f9a11e7","Type":"ContainerDied","Data":"52931bca33b633a8f7b4a404b3d376c51a9562b00ed924bbb1fbf19380cd707f"} Feb 20 12:00:03.303059 master-0 kubenswrapper[7756]: I0220 12:00:03.302877 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52931bca33b633a8f7b4a404b3d376c51a9562b00ed924bbb1fbf19380cd707f" Feb 20 12:00:03.306195 master-0 kubenswrapper[7756]: I0220 12:00:03.306136 7756 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="75dfcf2c7e75ed34e7d8254c990b8555834b339c0315692edbac504af1d4c6bd" exitCode=0 Feb 20 12:00:03.306341 master-0 kubenswrapper[7756]: I0220 12:00:03.306203 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"75dfcf2c7e75ed34e7d8254c990b8555834b339c0315692edbac504af1d4c6bd"} Feb 20 12:00:03.306341 master-0 kubenswrapper[7756]: I0220 
12:00:03.306262 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"9cc7b181ab55ab6abb3242c925ed6067592af711ebb394b812dbd9cfe003dfbd"} Feb 20 12:00:03.307066 master-0 kubenswrapper[7756]: I0220 12:00:03.306756 7756 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c" Feb 20 12:00:03.307066 master-0 kubenswrapper[7756]: I0220 12:00:03.306788 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c" Feb 20 12:00:03.880205 master-0 kubenswrapper[7756]: I0220 12:00:03.880126 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:03.880205 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:03.880205 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:03.880205 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:03.881113 master-0 kubenswrapper[7756]: I0220 12:00:03.880214 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:04.880675 master-0 kubenswrapper[7756]: I0220 12:00:04.880586 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:04.880675 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:04.880675 master-0 
kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:04.880675 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:04.881654 master-0 kubenswrapper[7756]: I0220 12:00:04.880684 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:05.880139 master-0 kubenswrapper[7756]: I0220 12:00:05.880056 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:05.880139 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:05.880139 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:05.880139 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:05.880624 master-0 kubenswrapper[7756]: I0220 12:00:05.880150 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:06.374209 master-0 kubenswrapper[7756]: E0220 12:00:06.374073 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 20 12:00:06.879807 master-0 kubenswrapper[7756]: I0220 12:00:06.879727 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:06.879807 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:06.879807 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:06.879807 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:06.880296 master-0 kubenswrapper[7756]: I0220 12:00:06.879819 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:07.714882 master-0 kubenswrapper[7756]: E0220 12:00:07.714559 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:59:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:59:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:59:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T11:59:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d
7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:18622d3875e4a2dd9fde1633a737ae82af1df960d3bbcbda22c44df6cea6aa74\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2d4cac2da3d445443ee7ac3918878091ebecdaadbd2742424bb1a02391a1c5b3\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1235965143},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2458acf77e6551a99656a2a1643e7ef4bf008f6bf792157614710eb9b28e0e64\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3c45f047394ebd29a640afe4c1e96739e5155ec608b61170a2274911bdf56a3d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210258627},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab73
9919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249b
c68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:00:07.880729 master-0 kubenswrapper[7756]: I0220 12:00:07.880620 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:07.880729 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:07.880729 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:07.880729 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:07.881150 master-0 kubenswrapper[7756]: 
I0220 12:00:07.880730 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:08.880500 master-0 kubenswrapper[7756]: I0220 12:00:08.880415 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:08.880500 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:08.880500 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:08.880500 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:08.881487 master-0 kubenswrapper[7756]: I0220 12:00:08.880502 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:09.880158 master-0 kubenswrapper[7756]: I0220 12:00:09.880061 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:09.880158 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:09.880158 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:09.880158 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:09.880632 master-0 kubenswrapper[7756]: I0220 12:00:09.880158 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:10.880680 master-0 kubenswrapper[7756]: I0220 12:00:10.880522 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:10.880680 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:10.880680 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:10.880680 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:10.880680 master-0 kubenswrapper[7756]: I0220 12:00:10.880669 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:11.881023 master-0 kubenswrapper[7756]: I0220 12:00:11.880916 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:11.881023 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:11.881023 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:11.881023 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:11.881023 master-0 kubenswrapper[7756]: I0220 12:00:11.881009 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:12.880350 master-0 kubenswrapper[7756]: I0220 12:00:12.880256 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:12.880350 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:12.880350 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:12.880350 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:12.880850 master-0 kubenswrapper[7756]: I0220 12:00:12.880358 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:13.880811 master-0 kubenswrapper[7756]: I0220 12:00:13.880698 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:13.880811 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:13.880811 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:13.880811 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:13.880811 master-0 kubenswrapper[7756]: I0220 12:00:13.880792 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:14.880746 master-0 kubenswrapper[7756]: I0220 12:00:14.880661 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:14.880746 master-0 
kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:14.880746 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:14.880746 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:14.881737 master-0 kubenswrapper[7756]: I0220 12:00:14.880771 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:15.063430 master-0 kubenswrapper[7756]: I0220 12:00:15.063360 7756 scope.go:117] "RemoveContainer" containerID="4dfeade65eb878550b91a87841f50892c43c67d9c3d37a72dc5c09f4d1bfeb67" Feb 20 12:00:15.086929 master-0 kubenswrapper[7756]: I0220 12:00:15.086887 7756 scope.go:117] "RemoveContainer" containerID="610ed904564d38a9663079b5791a3bed3f3fde288a983c4b6a5a9408be5ffc50" Feb 20 12:00:15.109076 master-0 kubenswrapper[7756]: I0220 12:00:15.108997 7756 scope.go:117] "RemoveContainer" containerID="e9300776eee7b9f506a4b0f31aa2e187971b5654c77d21860bda2d88ce86d8a4" Feb 20 12:00:15.146859 master-0 kubenswrapper[7756]: I0220 12:00:15.146806 7756 scope.go:117] "RemoveContainer" containerID="1d1e4f19b4b937664918df87724b0ce6399cbc186e4b82d3db56d2fb037a5e05" Feb 20 12:00:15.880745 master-0 kubenswrapper[7756]: I0220 12:00:15.880636 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:15.880745 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:15.880745 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:15.880745 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:15.881717 master-0 kubenswrapper[7756]: I0220 12:00:15.880744 7756 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:16.775778 master-0 kubenswrapper[7756]: E0220 12:00:16.775656 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 20 12:00:16.880667 master-0 kubenswrapper[7756]: I0220 12:00:16.880518 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:16.880667 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:16.880667 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:16.880667 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:16.880667 master-0 kubenswrapper[7756]: I0220 12:00:16.880662 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:17.716856 master-0 kubenswrapper[7756]: E0220 12:00:17.716647 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:00:17.881574 master-0 kubenswrapper[7756]: I0220 12:00:17.881467 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:17.881574 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:17.881574 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:17.881574 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:17.882823 master-0 kubenswrapper[7756]: I0220 12:00:17.881605 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:18.880130 master-0 kubenswrapper[7756]: I0220 12:00:18.880029 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:18.880130 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:18.880130 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:18.880130 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:18.880130 master-0 kubenswrapper[7756]: I0220 12:00:18.880126 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:19.880155 master-0 kubenswrapper[7756]: I0220 12:00:19.880060 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:19.880155 master-0 kubenswrapper[7756]: [-]has-synced failed: reason 
withheld Feb 20 12:00:19.880155 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:19.880155 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:19.881111 master-0 kubenswrapper[7756]: I0220 12:00:19.880158 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:20.880219 master-0 kubenswrapper[7756]: I0220 12:00:20.880106 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:20.880219 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:20.880219 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:20.880219 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:20.881289 master-0 kubenswrapper[7756]: I0220 12:00:20.880241 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:21.880433 master-0 kubenswrapper[7756]: I0220 12:00:21.880346 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:21.880433 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:21.880433 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:21.880433 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:21.881376 master-0 kubenswrapper[7756]: I0220 
12:00:21.880444 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:22.880774 master-0 kubenswrapper[7756]: I0220 12:00:22.880678 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:22.880774 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:22.880774 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:22.880774 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:22.881774 master-0 kubenswrapper[7756]: I0220 12:00:22.880776 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:23.880564 master-0 kubenswrapper[7756]: I0220 12:00:23.880427 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:23.880564 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:23.880564 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:23.880564 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:23.880564 master-0 kubenswrapper[7756]: I0220 12:00:23.880511 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 20 12:00:24.880581 master-0 kubenswrapper[7756]: I0220 12:00:24.880405 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:24.880581 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:24.880581 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:24.880581 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:24.880581 master-0 kubenswrapper[7756]: I0220 12:00:24.880569 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:25.880078 master-0 kubenswrapper[7756]: I0220 12:00:25.879990 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:25.880078 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:25.880078 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:25.880078 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:25.880504 master-0 kubenswrapper[7756]: I0220 12:00:25.880090 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:26.880879 master-0 kubenswrapper[7756]: I0220 12:00:26.880780 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:26.880879 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:26.880879 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:26.880879 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:26.880879 master-0 kubenswrapper[7756]: I0220 12:00:26.880873 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:27.577363 master-0 kubenswrapper[7756]: E0220 12:00:27.576983 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 20 12:00:27.717697 master-0 kubenswrapper[7756]: E0220 12:00:27.717627 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:00:27.880798 master-0 kubenswrapper[7756]: I0220 12:00:27.880588 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:27.880798 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:27.880798 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:27.880798 master-0 kubenswrapper[7756]: healthz 
check failed Feb 20 12:00:27.880798 master-0 kubenswrapper[7756]: I0220 12:00:27.880674 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:28.880686 master-0 kubenswrapper[7756]: I0220 12:00:28.880572 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:28.880686 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:28.880686 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:28.880686 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:28.880686 master-0 kubenswrapper[7756]: I0220 12:00:28.880673 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:29.880858 master-0 kubenswrapper[7756]: I0220 12:00:29.880753 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:29.880858 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:29.880858 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:29.880858 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:29.880858 master-0 kubenswrapper[7756]: I0220 12:00:29.880847 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" 
podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:30.880199 master-0 kubenswrapper[7756]: I0220 12:00:30.880078 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:30.880199 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:30.880199 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:30.880199 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:30.880199 master-0 kubenswrapper[7756]: I0220 12:00:30.880167 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:30.967490 master-0 kubenswrapper[7756]: E0220 12:00:30.967300 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1895f23f44d120fd openshift-kube-controller-manager 10186 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:a767e0793175d588147a983384ee43db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:53:01 +0000 UTC,LastTimestamp:2026-02-20 
11:59:01.746460585 +0000 UTC m=+587.488708623,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 12:00:31.879727 master-0 kubenswrapper[7756]: I0220 12:00:31.879638 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:31.879727 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:31.879727 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:31.879727 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:31.880251 master-0 kubenswrapper[7756]: I0220 12:00:31.879739 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:32.880107 master-0 kubenswrapper[7756]: I0220 12:00:32.879991 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:00:32.880107 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:00:32.880107 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:00:32.880107 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:00:32.880107 master-0 kubenswrapper[7756]: I0220 12:00:32.880096 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:00:33.880332 
master-0 kubenswrapper[7756]: I0220 12:00:33.880227 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:33.880332 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:33.880332 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:33.880332 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:33.880332 master-0 kubenswrapper[7756]: I0220 12:00:33.880328 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:34.879771 master-0 kubenswrapper[7756]: I0220 12:00:34.879669 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:34.879771 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:34.879771 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:34.879771 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:34.880232 master-0 kubenswrapper[7756]: I0220 12:00:34.879771 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:35.880439 master-0 kubenswrapper[7756]: I0220 12:00:35.880379 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:35.880439 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:35.880439 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:35.880439 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:35.881507 master-0 kubenswrapper[7756]: I0220 12:00:35.881459 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:36.880287 master-0 kubenswrapper[7756]: I0220 12:00:36.880178 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:36.880287 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:36.880287 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:36.880287 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:36.880287 master-0 kubenswrapper[7756]: I0220 12:00:36.880277 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:37.309187 master-0 kubenswrapper[7756]: E0220 12:00:37.309110 7756 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 20 12:00:37.718232 master-0 kubenswrapper[7756]: E0220 12:00:37.718149 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:00:37.882269 master-0 kubenswrapper[7756]: I0220 12:00:37.881917 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:37.882269 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:37.882269 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:37.882269 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:37.882269 master-0 kubenswrapper[7756]: I0220 12:00:37.882041 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:38.597389 master-0 kubenswrapper[7756]: I0220 12:00:38.597287 7756 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="da92cbde4f74d2a7379dc50dca70ba345568f184d4de102a2743c4569e81bf1e" exitCode=0
Feb 20 12:00:38.597389 master-0 kubenswrapper[7756]: I0220 12:00:38.597367 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"da92cbde4f74d2a7379dc50dca70ba345568f184d4de102a2743c4569e81bf1e"}
Feb 20 12:00:38.597863 master-0 kubenswrapper[7756]: I0220 12:00:38.597815 7756 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c"
Feb 20 12:00:38.597863 master-0 kubenswrapper[7756]: I0220 12:00:38.597851 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c"
Feb 20 12:00:38.880789 master-0 kubenswrapper[7756]: I0220 12:00:38.880617 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:38.880789 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:38.880789 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:38.880789 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:38.880789 master-0 kubenswrapper[7756]: I0220 12:00:38.880722 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:39.178822 master-0 kubenswrapper[7756]: E0220 12:00:39.178616 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Feb 20 12:00:39.880227 master-0 kubenswrapper[7756]: I0220 12:00:39.880145 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:39.880227 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:39.880227 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:39.880227 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:39.880713 master-0 kubenswrapper[7756]: I0220 12:00:39.880247 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:40.881210 master-0 kubenswrapper[7756]: I0220 12:00:40.881118 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:40.881210 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:40.881210 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:40.881210 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:40.882279 master-0 kubenswrapper[7756]: I0220 12:00:40.881218 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:41.880042 master-0 kubenswrapper[7756]: I0220 12:00:41.879979 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:41.880042 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:41.880042 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:41.880042 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:41.880648 master-0 kubenswrapper[7756]: I0220 12:00:41.880604 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:42.630562 master-0 kubenswrapper[7756]: I0220 12:00:42.630444 7756 generic.go:334] "Generic (PLEG): container finished" podID="6dfca740-0387-428a-b957-3e8a09c6e352" containerID="30cc0163534ef05cf8f1af8016be6ca5a9410b7c83b47a06334775bed42b37ab" exitCode=0
Feb 20 12:00:42.630562 master-0 kubenswrapper[7756]: I0220 12:00:42.630513 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" event={"ID":"6dfca740-0387-428a-b957-3e8a09c6e352","Type":"ContainerDied","Data":"30cc0163534ef05cf8f1af8016be6ca5a9410b7c83b47a06334775bed42b37ab"}
Feb 20 12:00:42.631729 master-0 kubenswrapper[7756]: I0220 12:00:42.630596 7756 scope.go:117] "RemoveContainer" containerID="3d84b64b15cc0bdfd81208f0d2d2402b1dd43fcf0c81056aa6b599a33f0ef14d"
Feb 20 12:00:42.632034 master-0 kubenswrapper[7756]: I0220 12:00:42.631983 7756 scope.go:117] "RemoveContainer" containerID="30cc0163534ef05cf8f1af8016be6ca5a9410b7c83b47a06334775bed42b37ab"
Feb 20 12:00:42.633321 master-0 kubenswrapper[7756]: E0220 12:00:42.632483 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-nr4tg_openshift-marketplace(6dfca740-0387-428a-b957-3e8a09c6e352)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" podUID="6dfca740-0387-428a-b957-3e8a09c6e352"
Feb 20 12:00:42.635003 master-0 kubenswrapper[7756]: I0220 12:00:42.634947 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-vs87f_b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/manager/1.log"
Feb 20 12:00:42.636367 master-0 kubenswrapper[7756]: I0220 12:00:42.636323 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-vs87f_b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/manager/0.log"
Feb 20 12:00:42.636490 master-0 kubenswrapper[7756]: I0220 12:00:42.636391 7756 generic.go:334] "Generic (PLEG): container finished" podID="b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1" containerID="c71d66a4b93651a9ca77699b6ac7544e90310b6a6968e997721a5f52319085ac" exitCode=1
Feb 20 12:00:42.636490 master-0 kubenswrapper[7756]: I0220 12:00:42.636441 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" event={"ID":"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1","Type":"ContainerDied","Data":"c71d66a4b93651a9ca77699b6ac7544e90310b6a6968e997721a5f52319085ac"}
Feb 20 12:00:42.637027 master-0 kubenswrapper[7756]: I0220 12:00:42.636976 7756 scope.go:117] "RemoveContainer" containerID="c71d66a4b93651a9ca77699b6ac7544e90310b6a6968e997721a5f52319085ac"
Feb 20 12:00:42.637413 master-0 kubenswrapper[7756]: E0220 12:00:42.637361 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-9cc7d7bb-vs87f_openshift-operator-controller(b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1)\"" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" podUID="b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1"
Feb 20 12:00:42.662336 master-0 kubenswrapper[7756]: I0220 12:00:42.662288 7756 scope.go:117] "RemoveContainer" containerID="5bf57c12fc70c17e6a09a820bf2ab5c2dd4edbb89e20cced0e4474b7e6ce7231"
Feb 20 12:00:42.880682 master-0 kubenswrapper[7756]: I0220 12:00:42.880508 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:42.880682 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:42.880682 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:42.880682 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:42.880682 master-0 kubenswrapper[7756]: I0220 12:00:42.880633 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:43.648341 master-0 kubenswrapper[7756]: I0220 12:00:43.648235 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-vs87f_b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/manager/1.log"
Feb 20 12:00:43.880474 master-0 kubenswrapper[7756]: I0220 12:00:43.880402 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:43.880474 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:43.880474 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:43.880474 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:43.880936 master-0 kubenswrapper[7756]: I0220 12:00:43.880505 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:44.879314 master-0 kubenswrapper[7756]: I0220 12:00:44.879229 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:44.879314 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:44.879314 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:44.879314 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:44.880350 master-0 kubenswrapper[7756]: I0220 12:00:44.879355 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:45.879576 master-0 kubenswrapper[7756]: I0220 12:00:45.879477 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:45.879576 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:45.879576 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:45.879576 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:45.880698 master-0 kubenswrapper[7756]: I0220 12:00:45.879587 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:46.825437 master-0 kubenswrapper[7756]: I0220 12:00:46.825335 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 12:00:46.826196 master-0 kubenswrapper[7756]: I0220 12:00:46.826149 7756 scope.go:117] "RemoveContainer" containerID="c71d66a4b93651a9ca77699b6ac7544e90310b6a6968e997721a5f52319085ac"
Feb 20 12:00:46.826615 master-0 kubenswrapper[7756]: E0220 12:00:46.826564 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-9cc7d7bb-vs87f_openshift-operator-controller(b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1)\"" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" podUID="b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1"
Feb 20 12:00:46.880153 master-0 kubenswrapper[7756]: I0220 12:00:46.880058 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:46.880153 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:46.880153 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:46.880153 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:46.881096 master-0 kubenswrapper[7756]: I0220 12:00:46.880161 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:47.613812 master-0 kubenswrapper[7756]: I0220 12:00:47.613710 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 12:00:47.613812 master-0 kubenswrapper[7756]: I0220 12:00:47.613818 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 12:00:47.614423 master-0 kubenswrapper[7756]: I0220 12:00:47.614378 7756 scope.go:117] "RemoveContainer" containerID="30cc0163534ef05cf8f1af8016be6ca5a9410b7c83b47a06334775bed42b37ab"
Feb 20 12:00:47.614774 master-0 kubenswrapper[7756]: E0220 12:00:47.614723 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-nr4tg_openshift-marketplace(6dfca740-0387-428a-b957-3e8a09c6e352)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" podUID="6dfca740-0387-428a-b957-3e8a09c6e352"
Feb 20 12:00:47.718769 master-0 kubenswrapper[7756]: E0220 12:00:47.718716 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:00:47.719240 master-0 kubenswrapper[7756]: E0220 12:00:47.719214 7756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 20 12:00:47.880773 master-0 kubenswrapper[7756]: I0220 12:00:47.880628 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:47.880773 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:47.880773 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:47.880773 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:47.880773 master-0 kubenswrapper[7756]: I0220 12:00:47.880725 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:48.880629 master-0 kubenswrapper[7756]: I0220 12:00:48.880558 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:48.880629 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:48.880629 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:48.880629 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:48.881568 master-0 kubenswrapper[7756]: I0220 12:00:48.880661 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:49.879875 master-0 kubenswrapper[7756]: I0220 12:00:49.879787 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:49.879875 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:49.879875 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:49.879875 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:49.880318 master-0 kubenswrapper[7756]: I0220 12:00:49.879894 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:50.880574 master-0 kubenswrapper[7756]: I0220 12:00:50.880481 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:50.880574 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:50.880574 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:50.880574 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:50.881728 master-0 kubenswrapper[7756]: I0220 12:00:50.880623 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:51.719648 master-0 kubenswrapper[7756]: I0220 12:00:51.719570 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-k8vs5_d9f9442b-25b9-420f-b748-bb13423809fe/manager/1.log"
Feb 20 12:00:51.720915 master-0 kubenswrapper[7756]: I0220 12:00:51.720847 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-k8vs5_d9f9442b-25b9-420f-b748-bb13423809fe/manager/0.log"
Feb 20 12:00:51.721564 master-0 kubenswrapper[7756]: I0220 12:00:51.721471 7756 generic.go:334] "Generic (PLEG): container finished" podID="d9f9442b-25b9-420f-b748-bb13423809fe" containerID="84ef230cc54cd476fcc604e2b0f1b7222d22839f67c943242d5c00ce3857fed6" exitCode=1
Feb 20 12:00:51.721564 master-0 kubenswrapper[7756]: I0220 12:00:51.721559 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" event={"ID":"d9f9442b-25b9-420f-b748-bb13423809fe","Type":"ContainerDied","Data":"84ef230cc54cd476fcc604e2b0f1b7222d22839f67c943242d5c00ce3857fed6"}
Feb 20 12:00:51.721747 master-0 kubenswrapper[7756]: I0220 12:00:51.721617 7756 scope.go:117] "RemoveContainer" containerID="fd156bc7a5466d6b67b1239ac8613c9df410e89cc9c884ed83f3394a7c8ae304"
Feb 20 12:00:51.722779 master-0 kubenswrapper[7756]: I0220 12:00:51.722707 7756 scope.go:117] "RemoveContainer" containerID="84ef230cc54cd476fcc604e2b0f1b7222d22839f67c943242d5c00ce3857fed6"
Feb 20 12:00:51.724924 master-0 kubenswrapper[7756]: E0220 12:00:51.723847 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-84b8d9d697-k8vs5_openshift-catalogd(d9f9442b-25b9-420f-b748-bb13423809fe)\"" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" podUID="d9f9442b-25b9-420f-b748-bb13423809fe"
Feb 20 12:00:51.881035 master-0 kubenswrapper[7756]: I0220 12:00:51.880945 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:51.881035 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:51.881035 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:51.881035 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:51.882000 master-0 kubenswrapper[7756]: I0220 12:00:51.881053 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:52.380002 master-0 kubenswrapper[7756]: E0220 12:00:52.379902 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Feb 20 12:00:52.732336 master-0 kubenswrapper[7756]: I0220 12:00:52.732141 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-k8vs5_d9f9442b-25b9-420f-b748-bb13423809fe/manager/1.log"
Feb 20 12:00:52.880251 master-0 kubenswrapper[7756]: I0220 12:00:52.880167 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:52.880251 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:52.880251 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:52.880251 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:52.880967 master-0 kubenswrapper[7756]: I0220 12:00:52.880259 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:53.879823 master-0 kubenswrapper[7756]: I0220 12:00:53.879740 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:53.879823 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:53.879823 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:53.879823 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:53.879823 master-0 kubenswrapper[7756]: I0220 12:00:53.879818 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:54.880178 master-0 kubenswrapper[7756]: I0220 12:00:54.880080 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:54.880178 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:54.880178 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:54.880178 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:54.881150 master-0 kubenswrapper[7756]: I0220 12:00:54.880187 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:55.880473 master-0 kubenswrapper[7756]: I0220 12:00:55.880383 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:00:55.880473 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:00:55.880473 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:00:55.880473 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:00:55.881401 master-0 kubenswrapper[7756]: I0220 12:00:55.880502 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:00:55.881401 master-0 kubenswrapper[7756]: I0220 12:00:55.880648 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 12:00:55.881734 master-0 kubenswrapper[7756]: I0220 12:00:55.881669 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"59be86e8d4a5781613fee8a9f98dc6c90430b05bfb61e001a26978b78f148625"} pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" containerMessage="Container router failed startup probe, will be restarted"
Feb 20 12:00:55.881819 master-0 kubenswrapper[7756]: I0220 12:00:55.881751 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" containerID="cri-o://59be86e8d4a5781613fee8a9f98dc6c90430b05bfb61e001a26978b78f148625" gracePeriod=3600
Feb 20 12:00:56.642295 master-0 kubenswrapper[7756]: I0220 12:00:56.642210 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 12:00:56.642295 master-0 kubenswrapper[7756]: I0220 12:00:56.642291 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 12:00:56.643054 master-0 kubenswrapper[7756]: I0220 12:00:56.643001 7756 scope.go:117] "RemoveContainer" containerID="84ef230cc54cd476fcc604e2b0f1b7222d22839f67c943242d5c00ce3857fed6"
Feb 20 12:00:56.643371 master-0 kubenswrapper[7756]: E0220 12:00:56.643303 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-84b8d9d697-k8vs5_openshift-catalogd(d9f9442b-25b9-420f-b748-bb13423809fe)\"" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" podUID="d9f9442b-25b9-420f-b748-bb13423809fe"
Feb 20 12:00:56.676968 master-0 kubenswrapper[7756]: I0220 12:00:56.676884 7756 status_manager.go:851] "Failed to get status for pod" podUID="3f22083d-dc18-4acd-aa7f-d01d407c7837" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-retry-1-master-0)"
Feb 20 12:00:56.825438 master-0 kubenswrapper[7756]: I0220 12:00:56.825346 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 12:00:56.826252 master-0 kubenswrapper[7756]: I0220 12:00:56.826216 7756 scope.go:117] "RemoveContainer" containerID="c71d66a4b93651a9ca77699b6ac7544e90310b6a6968e997721a5f52319085ac"
Feb 20 12:00:57.775395 master-0 kubenswrapper[7756]: I0220 12:00:57.775261 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/cluster-cloud-controller-manager/0.log"
Feb 20 12:00:57.776445 master-0 kubenswrapper[7756]: I0220 12:00:57.776394 7756 generic.go:334] "Generic (PLEG): container finished" podID="e8c48a22-ed96-42c5-ac4a-dd7d4f204539" containerID="5ae28e0dd7617cbe98b911e55270072130fade6b7dce5510c67c9d3d17bc60bf" exitCode=1
Feb 20 12:00:57.776740 master-0 kubenswrapper[7756]: I0220 12:00:57.776556 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" event={"ID":"e8c48a22-ed96-42c5-ac4a-dd7d4f204539","Type":"ContainerDied","Data":"5ae28e0dd7617cbe98b911e55270072130fade6b7dce5510c67c9d3d17bc60bf"}
Feb 20 12:00:57.777810 master-0 kubenswrapper[7756]: I0220 12:00:57.777774 7756 scope.go:117] "RemoveContainer" containerID="5ae28e0dd7617cbe98b911e55270072130fade6b7dce5510c67c9d3d17bc60bf"
Feb 20 12:00:57.779494 master-0 kubenswrapper[7756]: I0220 12:00:57.779420 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-vs87f_b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/manager/1.log"
Feb 20 12:00:57.780048 master-0 kubenswrapper[7756]: I0220 12:00:57.779985 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" event={"ID":"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1","Type":"ContainerStarted","Data":"06a36df68a6cf0070b0953b923c195dd77c4937477a5a3ed8c0f0b6e5e3d9d63"}
Feb 20 12:00:57.780314 master-0 kubenswrapper[7756]: I0220 12:00:57.780279 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 12:00:58.793304 master-0 kubenswrapper[7756]: I0220 12:00:58.793185 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/cluster-cloud-controller-manager/0.log"
Feb 20 12:00:58.794291 master-0 kubenswrapper[7756]: I0220 12:00:58.793363 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" event={"ID":"e8c48a22-ed96-42c5-ac4a-dd7d4f204539","Type":"ContainerStarted","Data":"e6dd3bb3dad312496a09cee44a1608c5a9688092b68835cfb3f775dbb6bcce96"}
Feb 20 12:01:00.811471 master-0 kubenswrapper[7756]: I0220 12:01:00.811326 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/1.log"
Feb 20 12:01:00.812353 master-0 kubenswrapper[7756]: I0220 12:01:00.812280 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/0.log"
Feb 20 12:01:00.812442 master-0 kubenswrapper[7756]: I0220 12:01:00.812392 7756 generic.go:334] "Generic (PLEG): container finished" podID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4" containerID="60fcf3fcd8aaaa40b7dfe96f543b72ba8310165975661dc288cd77f2a4374875" exitCode=1
Feb 20 12:01:00.812546 master-0 kubenswrapper[7756]: I0220 12:01:00.812453 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerDied","Data":"60fcf3fcd8aaaa40b7dfe96f543b72ba8310165975661dc288cd77f2a4374875"}
Feb 20 12:01:00.812625 master-0 kubenswrapper[7756]: I0220 12:01:00.812518 7756 scope.go:117] "RemoveContainer" containerID="dda80c885f92b57bca602a3a57fe7a72f775d424964427877643f5139f187abf"
Feb 20 12:01:00.813377 master-0 kubenswrapper[7756]: I0220 12:01:00.813302 7756 scope.go:117] "RemoveContainer" containerID="60fcf3fcd8aaaa40b7dfe96f543b72ba8310165975661dc288cd77f2a4374875"
Feb 20 12:01:00.813764 master-0 kubenswrapper[7756]: E0220 12:01:00.813700 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-792hn_openshift-cluster-storage-operator(bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" podUID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4"
Feb 20 12:01:01.579418 master-0 kubenswrapper[7756]: I0220 12:01:01.579309 7756 scope.go:117] "RemoveContainer" containerID="30cc0163534ef05cf8f1af8016be6ca5a9410b7c83b47a06334775bed42b37ab"
Feb 20 12:01:01.824480 master-0 kubenswrapper[7756]: I0220 12:01:01.824357 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" event={"ID":"6dfca740-0387-428a-b957-3e8a09c6e352","Type":"ContainerStarted","Data":"e5d47075f2d4f624c295ddde735d199b2d51ad697bbb189f80e108c16001b038"}
Feb 20 12:01:01.825224 master-0 kubenswrapper[7756]: I0220 12:01:01.824913 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 12:01:01.826497 master-0 kubenswrapper[7756]: I0220 12:01:01.826331 7756 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-nr4tg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.27:8080/healthz\": dial tcp 10.128.0.27:8080: connect: connection refused" start-of-body=
Feb 20 12:01:01.826497 master-0 kubenswrapper[7756]: I0220 12:01:01.826426 7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" podUID="6dfca740-0387-428a-b957-3e8a09c6e352" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.27:8080/healthz\": dial tcp 10.128.0.27:8080: connect: connection refused"
Feb 20 12:01:01.829197 master-0 kubenswrapper[7756]: I0220 12:01:01.829161 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/config-sync-controllers/0.log"
Feb 20 12:01:01.830252 master-0 kubenswrapper[7756]: I0220 12:01:01.830013 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/cluster-cloud-controller-manager/0.log"
Feb 20 12:01:01.830252 master-0 kubenswrapper[7756]: I0220 12:01:01.830112 7756 generic.go:334] "Generic (PLEG): container finished"
podID="e8c48a22-ed96-42c5-ac4a-dd7d4f204539" containerID="3de8f37a5f333a2a0c06335a1e1da92af4239f0f86ce6fc2f55eb1e6b9d57ccf" exitCode=1 Feb 20 12:01:01.830464 master-0 kubenswrapper[7756]: I0220 12:01:01.830237 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" event={"ID":"e8c48a22-ed96-42c5-ac4a-dd7d4f204539","Type":"ContainerDied","Data":"3de8f37a5f333a2a0c06335a1e1da92af4239f0f86ce6fc2f55eb1e6b9d57ccf"} Feb 20 12:01:01.831678 master-0 kubenswrapper[7756]: I0220 12:01:01.831514 7756 scope.go:117] "RemoveContainer" containerID="3de8f37a5f333a2a0c06335a1e1da92af4239f0f86ce6fc2f55eb1e6b9d57ccf" Feb 20 12:01:01.834742 master-0 kubenswrapper[7756]: I0220 12:01:01.834615 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/1.log" Feb 20 12:01:02.848744 master-0 kubenswrapper[7756]: I0220 12:01:02.848668 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/config-sync-controllers/0.log" Feb 20 12:01:02.849843 master-0 kubenswrapper[7756]: I0220 12:01:02.849782 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/cluster-cloud-controller-manager/0.log" Feb 20 12:01:02.849962 master-0 kubenswrapper[7756]: I0220 12:01:02.849915 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" 
event={"ID":"e8c48a22-ed96-42c5-ac4a-dd7d4f204539","Type":"ContainerStarted","Data":"20e892600b1060d890225ff0711469607ac5ff5d97beefb0a3b577ea00a977a4"} Feb 20 12:01:02.855651 master-0 kubenswrapper[7756]: I0220 12:01:02.855600 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 12:01:04.971552 master-0 kubenswrapper[7756]: E0220 12:01:04.971359 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1895f23f518ff0e0 openshift-kube-controller-manager 10196 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:a767e0793175d588147a983384ee43db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:53:02 +0000 UTC,LastTimestamp:2026-02-20 11:59:02.087080124 +0000 UTC m=+587.829328162,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 12:01:06.828259 master-0 kubenswrapper[7756]: I0220 12:01:06.828150 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:01:08.088459 master-0 kubenswrapper[7756]: E0220 12:01:08.088195 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T12:00:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T12:00:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T12:00:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T12:00:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:18622d3875e4a2dd9fde1633a737ae82af1df960d3bbcbda22c44df6cea6aa74\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2d4cac2da3d445443ee7ac3918878091ebecdaadbd2742424bb1a02391a1c5b3\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1235965143},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2458acf77e6551a99656a2a1643e7ef4bf008f6bf792157614710eb9b28e0e64\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3c45f047394ebd29a640afe4c1e96739e5155ec608b61170a2274911bdf56a3d\\\",\\\"registry.redhat.io/redhat/community-opera
tor-index:v4.18\\\"],\\\"sizeBytes\\\":1210258627},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d727
9665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\
"],\\\"sizeBytes\\\":471325816}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:01:08.581148 master-0 kubenswrapper[7756]: I0220 12:01:08.580988 7756 scope.go:117] "RemoveContainer" containerID="84ef230cc54cd476fcc604e2b0f1b7222d22839f67c943242d5c00ce3857fed6" Feb 20 12:01:08.781868 master-0 kubenswrapper[7756]: E0220 12:01:08.781762 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 20 12:01:09.918466 master-0 kubenswrapper[7756]: I0220 12:01:09.918365 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-k8vs5_d9f9442b-25b9-420f-b748-bb13423809fe/manager/1.log" Feb 20 12:01:09.919519 master-0 kubenswrapper[7756]: I0220 12:01:09.919292 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" event={"ID":"d9f9442b-25b9-420f-b748-bb13423809fe","Type":"ContainerStarted","Data":"7f0b0b1d35c6ba0da3f95feda9e9e958e95f8fce13809117a49c171bf095439b"} Feb 20 12:01:09.919659 master-0 kubenswrapper[7756]: I0220 12:01:09.919606 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:01:12.600885 master-0 kubenswrapper[7756]: E0220 12:01:12.600805 7756 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 20 12:01:13.960287 master-0 kubenswrapper[7756]: I0220 12:01:13.959967 7756 generic.go:334] "Generic (PLEG): container 
finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="5fc806dcdedd7688a77f47260543d32b11e6e7c063979fee3300f4d963557c80" exitCode=0 Feb 20 12:01:13.960287 master-0 kubenswrapper[7756]: I0220 12:01:13.960188 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"5fc806dcdedd7688a77f47260543d32b11e6e7c063979fee3300f4d963557c80"} Feb 20 12:01:13.961167 master-0 kubenswrapper[7756]: I0220 12:01:13.960865 7756 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c" Feb 20 12:01:13.961167 master-0 kubenswrapper[7756]: I0220 12:01:13.960902 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c" Feb 20 12:01:15.579680 master-0 kubenswrapper[7756]: I0220 12:01:15.579623 7756 scope.go:117] "RemoveContainer" containerID="60fcf3fcd8aaaa40b7dfe96f543b72ba8310165975661dc288cd77f2a4374875" Feb 20 12:01:15.978818 master-0 kubenswrapper[7756]: I0220 12:01:15.978646 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/1.log" Feb 20 12:01:15.978818 master-0 kubenswrapper[7756]: I0220 12:01:15.978774 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerStarted","Data":"f6b6d42109f301154569dbcc355083057306381c6e4ddc5df7a556bda8392333"} Feb 20 12:01:16.645087 master-0 kubenswrapper[7756]: I0220 12:01:16.645003 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:01:18.089434 master-0 kubenswrapper[7756]: E0220 12:01:18.089295 7756 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:01:23.036648 master-0 kubenswrapper[7756]: I0220 12:01:23.036521 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_aa2f6c0cf73fadd0d96a26150bb4dbb3/kube-scheduler/0.log" Feb 20 12:01:23.037622 master-0 kubenswrapper[7756]: I0220 12:01:23.037489 7756 generic.go:334] "Generic (PLEG): container finished" podID="aa2f6c0cf73fadd0d96a26150bb4dbb3" containerID="7e6c16941011718bcf6a9f94acdb17c25246b75a0407ed5d83ac4536ca1a0a88" exitCode=1 Feb 20 12:01:23.037622 master-0 kubenswrapper[7756]: I0220 12:01:23.037597 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerDied","Data":"7e6c16941011718bcf6a9f94acdb17c25246b75a0407ed5d83ac4536ca1a0a88"} Feb 20 12:01:23.038406 master-0 kubenswrapper[7756]: I0220 12:01:23.038357 7756 scope.go:117] "RemoveContainer" containerID="7e6c16941011718bcf6a9f94acdb17c25246b75a0407ed5d83ac4536ca1a0a88" Feb 20 12:01:23.654797 master-0 kubenswrapper[7756]: I0220 12:01:23.654575 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:01:23.654797 master-0 kubenswrapper[7756]: I0220 12:01:23.654677 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:01:24.050578 master-0 kubenswrapper[7756]: I0220 12:01:24.050446 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_aa2f6c0cf73fadd0d96a26150bb4dbb3/kube-scheduler/0.log" Feb 20 
12:01:24.051650 master-0 kubenswrapper[7756]: I0220 12:01:24.051206 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"6328c06f948b47accbad70a96e466cffb8cc3d168855718a0c1e45d2ff1a2d20"} Feb 20 12:01:24.051767 master-0 kubenswrapper[7756]: I0220 12:01:24.051701 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:01:25.783513 master-0 kubenswrapper[7756]: E0220 12:01:25.783415 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 20 12:01:28.085084 master-0 kubenswrapper[7756]: I0220 12:01:28.084983 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-qg84l_7635c0ff-4d40-4310-8187-230323e504e0/machine-approver-controller/0.log" Feb 20 12:01:28.085970 master-0 kubenswrapper[7756]: I0220 12:01:28.085847 7756 generic.go:334] "Generic (PLEG): container finished" podID="7635c0ff-4d40-4310-8187-230323e504e0" containerID="43e3bfd2d03db486eaa07c471fb4184138af1fd2a51e7d71dbadb2ebc26dee9d" exitCode=255 Feb 20 12:01:28.085970 master-0 kubenswrapper[7756]: I0220 12:01:28.085899 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" event={"ID":"7635c0ff-4d40-4310-8187-230323e504e0","Type":"ContainerDied","Data":"43e3bfd2d03db486eaa07c471fb4184138af1fd2a51e7d71dbadb2ebc26dee9d"} Feb 20 12:01:28.086769 master-0 kubenswrapper[7756]: I0220 12:01:28.086723 7756 scope.go:117] "RemoveContainer" 
containerID="43e3bfd2d03db486eaa07c471fb4184138af1fd2a51e7d71dbadb2ebc26dee9d" Feb 20 12:01:28.089983 master-0 kubenswrapper[7756]: E0220 12:01:28.089935 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:01:29.101675 master-0 kubenswrapper[7756]: I0220 12:01:29.101598 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log" Feb 20 12:01:29.102579 master-0 kubenswrapper[7756]: I0220 12:01:29.101698 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="0616a5d031f34cdf4ba086c5e6e13dc1c06dc0cc61473c6faf71fc5fd1759c28" exitCode=0 Feb 20 12:01:29.102579 master-0 kubenswrapper[7756]: I0220 12:01:29.101760 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerDied","Data":"0616a5d031f34cdf4ba086c5e6e13dc1c06dc0cc61473c6faf71fc5fd1759c28"} Feb 20 12:01:29.102793 master-0 kubenswrapper[7756]: I0220 12:01:29.102645 7756 scope.go:117] "RemoveContainer" containerID="0616a5d031f34cdf4ba086c5e6e13dc1c06dc0cc61473c6faf71fc5fd1759c28" Feb 20 12:01:29.105833 master-0 kubenswrapper[7756]: I0220 12:01:29.105773 7756 generic.go:334] "Generic (PLEG): container finished" podID="31969539-bfd1-466f-8697-f13cbbd957df" containerID="61a6b1802bd2528d8da1d6327d61e384e195f07e99b735a85a4645765053313c" exitCode=0 Feb 20 12:01:29.106057 master-0 kubenswrapper[7756]: I0220 12:01:29.105871 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" 
event={"ID":"31969539-bfd1-466f-8697-f13cbbd957df","Type":"ContainerDied","Data":"61a6b1802bd2528d8da1d6327d61e384e195f07e99b735a85a4645765053313c"} Feb 20 12:01:29.106843 master-0 kubenswrapper[7756]: I0220 12:01:29.106664 7756 scope.go:117] "RemoveContainer" containerID="61a6b1802bd2528d8da1d6327d61e384e195f07e99b735a85a4645765053313c" Feb 20 12:01:29.114275 master-0 kubenswrapper[7756]: I0220 12:01:29.114230 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-fn7j5_21e8e44b-b883-4afb-af90-d6c1265edf34/control-plane-machine-set-operator/0.log" Feb 20 12:01:29.114366 master-0 kubenswrapper[7756]: I0220 12:01:29.114297 7756 generic.go:334] "Generic (PLEG): container finished" podID="21e8e44b-b883-4afb-af90-d6c1265edf34" containerID="35ae4f95ab0c966594fd2d547d61e743ca73d94994a40e72e5d8f5673d88afb4" exitCode=1 Feb 20 12:01:29.114563 master-0 kubenswrapper[7756]: I0220 12:01:29.114470 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" event={"ID":"21e8e44b-b883-4afb-af90-d6c1265edf34","Type":"ContainerDied","Data":"35ae4f95ab0c966594fd2d547d61e743ca73d94994a40e72e5d8f5673d88afb4"} Feb 20 12:01:29.115504 master-0 kubenswrapper[7756]: I0220 12:01:29.115472 7756 scope.go:117] "RemoveContainer" containerID="35ae4f95ab0c966594fd2d547d61e743ca73d94994a40e72e5d8f5673d88afb4" Feb 20 12:01:29.118096 master-0 kubenswrapper[7756]: I0220 12:01:29.117988 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/cluster-baremetal-operator/0.log" Feb 20 12:01:29.118945 master-0 kubenswrapper[7756]: I0220 12:01:29.118103 7756 generic.go:334] "Generic (PLEG): container finished" podID="bd609bd3-2525-4b88-8f07-94a0418fb582" containerID="7e35d0d46d086257733502e192ec247382f7e26c3f0f6b4f8392900b3f91657b" exitCode=1 Feb 20 
12:01:29.118945 master-0 kubenswrapper[7756]: I0220 12:01:29.118185 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" event={"ID":"bd609bd3-2525-4b88-8f07-94a0418fb582","Type":"ContainerDied","Data":"7e35d0d46d086257733502e192ec247382f7e26c3f0f6b4f8392900b3f91657b"}
Feb 20 12:01:29.119127 master-0 kubenswrapper[7756]: I0220 12:01:29.119038 7756 scope.go:117] "RemoveContainer" containerID="7e35d0d46d086257733502e192ec247382f7e26c3f0f6b4f8392900b3f91657b"
Feb 20 12:01:29.122254 master-0 kubenswrapper[7756]: I0220 12:01:29.122192 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-qg84l_7635c0ff-4d40-4310-8187-230323e504e0/machine-approver-controller/0.log"
Feb 20 12:01:29.122849 master-0 kubenswrapper[7756]: I0220 12:01:29.122787 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" event={"ID":"7635c0ff-4d40-4310-8187-230323e504e0","Type":"ContainerStarted","Data":"8c5461d15d7eab278a7d6ba7f412ddd0416b9d86f734dccb7a0b7a984f4d1c02"}
Feb 20 12:01:30.135211 master-0 kubenswrapper[7756]: I0220 12:01:30.135149 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" event={"ID":"31969539-bfd1-466f-8697-f13cbbd957df","Type":"ContainerStarted","Data":"792c74a643b2db968e16ba865c225925287fe66d760e0e3cf46d73e10974f2aa"}
Feb 20 12:01:30.139341 master-0 kubenswrapper[7756]: I0220 12:01:30.139288 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-fn7j5_21e8e44b-b883-4afb-af90-d6c1265edf34/control-plane-machine-set-operator/0.log"
Feb 20 12:01:30.139644 master-0 kubenswrapper[7756]: I0220 12:01:30.139578 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" event={"ID":"21e8e44b-b883-4afb-af90-d6c1265edf34","Type":"ContainerStarted","Data":"a7fdbb3fde31a83e859b00c818230aae4b04a7966bc3554710ff7cf601daf49a"}
Feb 20 12:01:30.144261 master-0 kubenswrapper[7756]: I0220 12:01:30.144162 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/cluster-baremetal-operator/0.log"
Feb 20 12:01:30.144515 master-0 kubenswrapper[7756]: I0220 12:01:30.144435 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" event={"ID":"bd609bd3-2525-4b88-8f07-94a0418fb582","Type":"ContainerStarted","Data":"1f0c874f0434630bd93de4bc13495f67300659cb1712b213b4e98726a3091219"}
Feb 20 12:01:30.150654 master-0 kubenswrapper[7756]: I0220 12:01:30.150584 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log"
Feb 20 12:01:30.150778 master-0 kubenswrapper[7756]: I0220 12:01:30.150702 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"3c1968a342bc2b1038295e83962863fa158ee7e02e808b6ba1e7db9cacdb32ad"}
Feb 20 12:01:31.968791 master-0 kubenswrapper[7756]: I0220 12:01:31.968720 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:01:31.969457 master-0 kubenswrapper[7756]: I0220 12:01:31.968806 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:01:33.177516 master-0 kubenswrapper[7756]: I0220 12:01:33.177431 7756 generic.go:334] "Generic (PLEG): container finished" podID="98226a59-5234-48f3-a9cd-21de305810dc" containerID="1b7b0cda43f9601273f5b828026cbdd290a92a99bdd94b1cd74e1268067e317e" exitCode=0
Feb 20 12:01:33.177516 master-0 kubenswrapper[7756]: I0220 12:01:33.177504 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" event={"ID":"98226a59-5234-48f3-a9cd-21de305810dc","Type":"ContainerDied","Data":"1b7b0cda43f9601273f5b828026cbdd290a92a99bdd94b1cd74e1268067e317e"}
Feb 20 12:01:33.178299 master-0 kubenswrapper[7756]: I0220 12:01:33.177598 7756 scope.go:117] "RemoveContainer" containerID="c5857fd0f578f323286023fc24db8dcdefabd0753d52c557c0cb0421ff06a92f"
Feb 20 12:01:33.178299 master-0 kubenswrapper[7756]: I0220 12:01:33.178255 7756 scope.go:117] "RemoveContainer" containerID="1b7b0cda43f9601273f5b828026cbdd290a92a99bdd94b1cd74e1268067e317e"
Feb 20 12:01:33.178664 master-0 kubenswrapper[7756]: E0220 12:01:33.178612 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-599c7886f5-zltnd_openshift-controller-manager(98226a59-5234-48f3-a9cd-21de305810dc)\"" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" podUID="98226a59-5234-48f3-a9cd-21de305810dc"
Feb 20 12:01:34.969833 master-0 kubenswrapper[7756]: I0220 12:01:34.969709 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 12:01:34.969833 master-0 kubenswrapper[7756]: I0220 12:01:34.969768 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:01:35.785827 master-0 kubenswrapper[7756]: I0220 12:01:35.785734 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 12:01:35.785827 master-0 kubenswrapper[7756]: I0220 12:01:35.785801 7756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 12:01:35.786330 master-0 kubenswrapper[7756]: I0220 12:01:35.786287 7756 scope.go:117] "RemoveContainer" containerID="1b7b0cda43f9601273f5b828026cbdd290a92a99bdd94b1cd74e1268067e317e"
Feb 20 12:01:35.786600 master-0 kubenswrapper[7756]: E0220 12:01:35.786568 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-599c7886f5-zltnd_openshift-controller-manager(98226a59-5234-48f3-a9cd-21de305810dc)\"" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" podUID="98226a59-5234-48f3-a9cd-21de305810dc"
Feb 20 12:01:38.090634 master-0 kubenswrapper[7756]: E0220 12:01:38.090472 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:01:38.974708 master-0 kubenswrapper[7756]: E0220 12:01:38.974476 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1895f23f523d84eb openshift-kube-controller-manager 10197 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:a767e0793175d588147a983384ee43db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:53:02 +0000 UTC,LastTimestamp:2026-02-20 11:59:02.101165408 +0000 UTC m=+587.843413456,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 12:01:42.258807 master-0 kubenswrapper[7756]: I0220 12:01:42.258722 7756 generic.go:334] "Generic (PLEG): container finished" podID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerID="59be86e8d4a5781613fee8a9f98dc6c90430b05bfb61e001a26978b78f148625" exitCode=0
Feb 20 12:01:42.258807 master-0 kubenswrapper[7756]: I0220 12:01:42.258795 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerDied","Data":"59be86e8d4a5781613fee8a9f98dc6c90430b05bfb61e001a26978b78f148625"}
Feb 20 12:01:42.259997 master-0 kubenswrapper[7756]: I0220 12:01:42.258837 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerStarted","Data":"e6a70c0e0f237b900ba323a2d2250f1ed5e02a069194617f8e9507c1f16cde63"}
Feb 20 12:01:42.259997 master-0 kubenswrapper[7756]: I0220 12:01:42.258862 7756 scope.go:117] "RemoveContainer" containerID="e665c0ba7cf5562cef899fea3b259e95ae91076c695d828d8b5ee4e482dac445"
Feb 20 12:01:42.785719 master-0 kubenswrapper[7756]: E0220 12:01:42.785573 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 12:01:42.877469 master-0 kubenswrapper[7756]: I0220 12:01:42.877345 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 12:01:42.883287 master-0 kubenswrapper[7756]: I0220 12:01:42.883203 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:42.883287 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:42.883287 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:42.883287 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:42.883877 master-0 kubenswrapper[7756]: I0220 12:01:42.883311 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:43.881581 master-0 kubenswrapper[7756]: I0220 12:01:43.881418 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:43.881581 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:43.881581 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:43.881581 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:43.881581 master-0 kubenswrapper[7756]: I0220 12:01:43.881524 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:44.880493 master-0 kubenswrapper[7756]: I0220 12:01:44.880374 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:44.880493 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:44.880493 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:44.880493 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:44.880493 master-0 kubenswrapper[7756]: I0220 12:01:44.880460 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:44.970059 master-0 kubenswrapper[7756]: I0220 12:01:44.969969 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 12:01:44.970790 master-0 kubenswrapper[7756]: I0220 12:01:44.970062 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:01:45.880897 master-0 kubenswrapper[7756]: I0220 12:01:45.880821 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:45.880897 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:45.880897 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:45.880897 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:45.881259 master-0 kubenswrapper[7756]: I0220 12:01:45.880908 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:46.298553 master-0 kubenswrapper[7756]: I0220 12:01:46.298473 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/2.log"
Feb 20 12:01:46.299495 master-0 kubenswrapper[7756]: I0220 12:01:46.299187 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/1.log"
Feb 20 12:01:46.299495 master-0 kubenswrapper[7756]: I0220 12:01:46.299254 7756 generic.go:334] "Generic (PLEG): container finished" podID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4" containerID="f6b6d42109f301154569dbcc355083057306381c6e4ddc5df7a556bda8392333" exitCode=1
Feb 20 12:01:46.299495 master-0 kubenswrapper[7756]: I0220 12:01:46.299313 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerDied","Data":"f6b6d42109f301154569dbcc355083057306381c6e4ddc5df7a556bda8392333"}
Feb 20 12:01:46.299495 master-0 kubenswrapper[7756]: I0220 12:01:46.299400 7756 scope.go:117] "RemoveContainer" containerID="60fcf3fcd8aaaa40b7dfe96f543b72ba8310165975661dc288cd77f2a4374875"
Feb 20 12:01:46.300188 master-0 kubenswrapper[7756]: I0220 12:01:46.300126 7756 scope.go:117] "RemoveContainer" containerID="f6b6d42109f301154569dbcc355083057306381c6e4ddc5df7a556bda8392333"
Feb 20 12:01:46.300559 master-0 kubenswrapper[7756]: E0220 12:01:46.300476 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-792hn_openshift-cluster-storage-operator(bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" podUID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4"
Feb 20 12:01:46.880442 master-0 kubenswrapper[7756]: I0220 12:01:46.880349 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:46.880442 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:46.880442 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:46.880442 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:46.880928 master-0 kubenswrapper[7756]: I0220 12:01:46.880452 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:47.308803 master-0 kubenswrapper[7756]: I0220 12:01:47.308722 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/2.log"
Feb 20 12:01:47.880893 master-0 kubenswrapper[7756]: I0220 12:01:47.880757 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:47.880893 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:47.880893 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:47.880893 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:47.880893 master-0 kubenswrapper[7756]: I0220 12:01:47.880874 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:47.964599 master-0 kubenswrapper[7756]: E0220 12:01:47.964436 7756 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 20 12:01:48.096920 master-0 kubenswrapper[7756]: E0220 12:01:48.096778 7756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:01:48.096920 master-0 kubenswrapper[7756]: E0220 12:01:48.096871 7756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 20 12:01:48.321907 master-0 kubenswrapper[7756]: I0220 12:01:48.321646 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"eaf506e1d0783590de31091ab32fce9d35d713eecc3c017ce74b9f3f24f2dadf"}
Feb 20 12:01:48.880887 master-0 kubenswrapper[7756]: I0220 12:01:48.880786 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:48.880887 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:48.880887 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:48.880887 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:48.881314 master-0 kubenswrapper[7756]: I0220 12:01:48.880895 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:49.338009 master-0 kubenswrapper[7756]: I0220 12:01:49.337892 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"5dcb14edc89c8213a50a7e6739f83d85ebe48b452d347e79ab4d89bb7e065fc0"}
Feb 20 12:01:49.343095 master-0 kubenswrapper[7756]: I0220 12:01:49.338016 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"b422dd2ce4dd289728186b260c4a3879d1a3b820c3ec1d35590e2886b5db5a66"}
Feb 20 12:01:49.877576 master-0 kubenswrapper[7756]: I0220 12:01:49.877383 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 12:01:49.880138 master-0 kubenswrapper[7756]: I0220 12:01:49.880082 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:49.880138 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:49.880138 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:49.880138 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:49.880407 master-0 kubenswrapper[7756]: I0220 12:01:49.880151 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:50.352917 master-0 kubenswrapper[7756]: I0220 12:01:50.352841 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"a238930ecbb0a76e558bddf991220f2abccffd8a3149eaf2e96e10a5a7336ae9"}
Feb 20 12:01:50.352917 master-0 kubenswrapper[7756]: I0220 12:01:50.352925 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"9b0fc4bd4c3cfd9b9709f31ae2aefd01c06a176b29710776ce6e72efcd897ae5"}
Feb 20 12:01:50.354334 master-0 kubenswrapper[7756]: I0220 12:01:50.354289 7756 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c"
Feb 20 12:01:50.354503 master-0 kubenswrapper[7756]: I0220 12:01:50.354478 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="20df98a1-7355-4d48-920c-675e1211ec9c"
Feb 20 12:01:50.579399 master-0 kubenswrapper[7756]: I0220 12:01:50.579307 7756 scope.go:117] "RemoveContainer" containerID="1b7b0cda43f9601273f5b828026cbdd290a92a99bdd94b1cd74e1268067e317e"
Feb 20 12:01:50.881427 master-0 kubenswrapper[7756]: I0220 12:01:50.881227 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:50.881427 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:50.881427 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:50.881427 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:50.881427 master-0 kubenswrapper[7756]: I0220 12:01:50.881328 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:51.367356 master-0 kubenswrapper[7756]: I0220 12:01:51.367228 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" event={"ID":"98226a59-5234-48f3-a9cd-21de305810dc","Type":"ContainerStarted","Data":"b979af759c905f991e94f3acb27f25df10266c61c084ab82e4e30ab77b2ee843"}
Feb 20 12:01:51.368367 master-0 kubenswrapper[7756]: I0220 12:01:51.367716 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 12:01:51.373793 master-0 kubenswrapper[7756]: I0220 12:01:51.373738 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 12:01:51.880276 master-0 kubenswrapper[7756]: I0220 12:01:51.880167 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:51.880276 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:51.880276 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:51.880276 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:51.880276 master-0 kubenswrapper[7756]: I0220 12:01:51.880255 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:52.608577 master-0 kubenswrapper[7756]: I0220 12:01:52.608487 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 20 12:01:52.609327 master-0 kubenswrapper[7756]: I0220 12:01:52.608631 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 20 12:01:52.880240 master-0 kubenswrapper[7756]: I0220 12:01:52.880067 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:52.880240 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:52.880240 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:52.880240 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:52.880240 master-0 kubenswrapper[7756]: I0220 12:01:52.880170 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:53.880755 master-0 kubenswrapper[7756]: I0220 12:01:53.880629 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:53.880755 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:53.880755 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:53.880755 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:53.880755 master-0 kubenswrapper[7756]: I0220 12:01:53.880717 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:54.879829 master-0 kubenswrapper[7756]: I0220 12:01:54.879750 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:54.879829 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:54.879829 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:54.879829 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:54.880258 master-0 kubenswrapper[7756]: I0220 12:01:54.879828 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:54.969250 master-0 kubenswrapper[7756]: I0220 12:01:54.969200 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 12:01:54.970177 master-0 kubenswrapper[7756]: I0220 12:01:54.970131 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:01:54.970421 master-0 kubenswrapper[7756]: I0220 12:01:54.970395 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:01:54.971710 master-0 kubenswrapper[7756]: I0220 12:01:54.971672 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"3c1968a342bc2b1038295e83962863fa158ee7e02e808b6ba1e7db9cacdb32ad"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 20 12:01:54.972026 master-0 kubenswrapper[7756]: I0220 12:01:54.971993 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" containerID="cri-o://3c1968a342bc2b1038295e83962863fa158ee7e02e808b6ba1e7db9cacdb32ad" gracePeriod=30
Feb 20 12:01:55.405805 master-0 kubenswrapper[7756]: I0220 12:01:55.405736 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/1.log"
Feb 20 12:01:55.428484 master-0 kubenswrapper[7756]: I0220 12:01:55.428396 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log"
Feb 20 12:01:55.428766 master-0 kubenswrapper[7756]: I0220 12:01:55.428506 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="3c1968a342bc2b1038295e83962863fa158ee7e02e808b6ba1e7db9cacdb32ad" exitCode=255
Feb 20 12:01:55.428766 master-0 kubenswrapper[7756]: I0220 12:01:55.428604 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerDied","Data":"3c1968a342bc2b1038295e83962863fa158ee7e02e808b6ba1e7db9cacdb32ad"}
Feb 20 12:01:55.428766 master-0 kubenswrapper[7756]: I0220 12:01:55.428702 7756 scope.go:117] "RemoveContainer" containerID="0616a5d031f34cdf4ba086c5e6e13dc1c06dc0cc61473c6faf71fc5fd1759c28"
Feb 20 12:01:55.879587 master-0 kubenswrapper[7756]: I0220 12:01:55.879504 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:55.879587 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:55.879587 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:55.879587 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:55.880012 master-0 kubenswrapper[7756]: I0220 12:01:55.879599 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:56.440100 master-0 kubenswrapper[7756]: I0220 12:01:56.439998 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/4.log"
Feb 20 12:01:56.441166 master-0 kubenswrapper[7756]: I0220 12:01:56.440911 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/3.log"
Feb 20 12:01:56.441713 master-0 kubenswrapper[7756]: I0220 12:01:56.441620 7756 generic.go:334] "Generic (PLEG): container finished" podID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" containerID="0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb" exitCode=1
Feb 20 12:01:56.441713 master-0 kubenswrapper[7756]: I0220 12:01:56.441691 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerDied","Data":"0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb"}
Feb 20 12:01:56.441902 master-0 kubenswrapper[7756]: I0220 12:01:56.441767 7756 scope.go:117] "RemoveContainer" containerID="095bad8ef44d69b4cc26fcf2dd343a67938137ce3213cb7022a98a05d1eb31af"
Feb 20 12:01:56.442576 master-0 kubenswrapper[7756]: I0220 12:01:56.442489 7756 scope.go:117] "RemoveContainer" containerID="0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb"
Feb 20 12:01:56.443080 master-0 kubenswrapper[7756]: E0220 12:01:56.443021 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e"
Feb 20 12:01:56.447203 master-0 kubenswrapper[7756]: I0220 12:01:56.447116 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/1.log"
Feb 20 12:01:56.449750 master-0 kubenswrapper[7756]: I0220 12:01:56.449690 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log"
Feb 20 12:01:56.449918 master-0 kubenswrapper[7756]: I0220 12:01:56.449791 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"75d9d37cd248f0e723c9a7084c00a12621efc7a713ce161db93fefc8ef371e85"}
Feb 20 12:01:56.679056 master-0 kubenswrapper[7756]: I0220 12:01:56.678951 7756 status_manager.go:851] "Failed to get status for pod" podUID="305f625e-16b0-4840-a9e2-25571b49ad2a" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)"
Feb 20 12:01:56.880189 master-0 kubenswrapper[7756]: I0220 12:01:56.880125 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:56.880189 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:56.880189 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:56.880189 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:56.880498 master-0 kubenswrapper[7756]: I0220 12:01:56.880209 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:57.461569 master-0 kubenswrapper[7756]: I0220 12:01:57.461440 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/4.log"
Feb 20 12:01:57.578766 master-0 kubenswrapper[7756]: I0220 12:01:57.578691 7756 scope.go:117] "RemoveContainer" containerID="f6b6d42109f301154569dbcc355083057306381c6e4ddc5df7a556bda8392333"
Feb 20 12:01:57.579078 master-0 kubenswrapper[7756]: E0220 12:01:57.579032 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-792hn_openshift-cluster-storage-operator(bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" podUID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4"
Feb 20 12:01:57.880089 master-0 kubenswrapper[7756]: I0220 12:01:57.879950 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:57.880089 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:57.880089 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:57.880089 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:57.880599 master-0 kubenswrapper[7756]: I0220 12:01:57.880148 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:58.880434 master-0 kubenswrapper[7756]: I0220 12:01:58.880353 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:58.880434 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:58.880434 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:01:58.880434 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:01:58.881244 master-0 kubenswrapper[7756]: I0220 12:01:58.880446 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:01:59.786759 master-0 kubenswrapper[7756]: E0220 12:01:59.786614 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 12:01:59.880691 master-0 kubenswrapper[7756]: I0220 12:01:59.880618 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:01:59.880691 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:01:59.880691 master-0 kubenswrapper[7756]: [+]process-running ok
Feb
20 12:01:59.880691 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:01:59.881660 master-0 kubenswrapper[7756]: I0220 12:01:59.880711 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:00.880838 master-0 kubenswrapper[7756]: I0220 12:02:00.880754 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:00.880838 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:00.880838 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:00.880838 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:00.881863 master-0 kubenswrapper[7756]: I0220 12:02:00.880853 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:01.880372 master-0 kubenswrapper[7756]: I0220 12:02:01.880292 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:01.880372 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:01.880372 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:01.880372 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:01.880866 master-0 kubenswrapper[7756]: I0220 12:02:01.880374 7756 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:01.969485 master-0 kubenswrapper[7756]: I0220 12:02:01.969388 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:02:01.970673 master-0 kubenswrapper[7756]: I0220 12:02:01.969517 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:02:02.164267 master-0 kubenswrapper[7756]: I0220 12:02:02.164037 7756 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Feb 20 12:02:02.240677 master-0 kubenswrapper[7756]: I0220 12:02:02.240599 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 20 12:02:02.243713 master-0 kubenswrapper[7756]: I0220 12:02:02.243662 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 20 12:02:02.600830 master-0 kubenswrapper[7756]: I0220 12:02:02.600767 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 12:02:02.611683 master-0 kubenswrapper[7756]: I0220 12:02:02.611619 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 12:02:02.645653 master-0 kubenswrapper[7756]: I0220 12:02:02.644290 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 20 12:02:02.881464 master-0 kubenswrapper[7756]: I0220 12:02:02.881264 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:02.881464 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:02.881464 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:02.881464 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:02.881464 master-0 kubenswrapper[7756]: I0220 12:02:02.881362 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:03.880186 master-0 kubenswrapper[7756]: I0220 12:02:03.880040 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:03.880186 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:03.880186 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:03.880186 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:03.880727 master-0 kubenswrapper[7756]: I0220 12:02:03.880225 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:04.591913 master-0 kubenswrapper[7756]: I0220 12:02:04.591796 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f22083d-dc18-4acd-aa7f-d01d407c7837" path="/var/lib/kubelet/pods/3f22083d-dc18-4acd-aa7f-d01d407c7837/volumes" Feb 20 12:02:04.880093 master-0 kubenswrapper[7756]: I0220 12:02:04.879930 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:04.880093 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:04.880093 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:04.880093 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:04.880093 master-0 kubenswrapper[7756]: I0220 12:02:04.880025 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:04.969687 master-0 kubenswrapper[7756]: I0220 12:02:04.969597 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 12:02:04.969687 master-0 kubenswrapper[7756]: I0220 12:02:04.969675 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 12:02:05.880133 master-0 kubenswrapper[7756]: I0220 12:02:05.880065 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:05.880133 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:05.880133 
master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:05.880133 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:05.881173 master-0 kubenswrapper[7756]: I0220 12:02:05.880149 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:06.880588 master-0 kubenswrapper[7756]: I0220 12:02:06.880491 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:06.880588 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:06.880588 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:06.880588 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:06.881617 master-0 kubenswrapper[7756]: I0220 12:02:06.880615 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:07.628881 master-0 kubenswrapper[7756]: I0220 12:02:07.628806 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 20 12:02:07.880591 master-0 kubenswrapper[7756]: I0220 12:02:07.880378 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:07.880591 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:07.880591 master-0 kubenswrapper[7756]: 
[+]process-running ok Feb 20 12:02:07.880591 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:07.880591 master-0 kubenswrapper[7756]: I0220 12:02:07.880482 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:08.578712 master-0 kubenswrapper[7756]: I0220 12:02:08.578629 7756 scope.go:117] "RemoveContainer" containerID="f6b6d42109f301154569dbcc355083057306381c6e4ddc5df7a556bda8392333" Feb 20 12:02:08.880605 master-0 kubenswrapper[7756]: I0220 12:02:08.880342 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:08.880605 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:08.880605 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:08.880605 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:08.880605 master-0 kubenswrapper[7756]: I0220 12:02:08.880437 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:09.571630 master-0 kubenswrapper[7756]: I0220 12:02:09.571522 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/2.log" Feb 20 12:02:09.571630 master-0 kubenswrapper[7756]: I0220 12:02:09.571632 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" 
event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerStarted","Data":"de9e8c93c0df2890c4752dca06392332c979a4e0bdd653de93c036cd77ec19ee"} Feb 20 12:02:09.880467 master-0 kubenswrapper[7756]: I0220 12:02:09.880287 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:09.880467 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:09.880467 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:09.880467 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:09.880467 master-0 kubenswrapper[7756]: I0220 12:02:09.880389 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:10.579640 master-0 kubenswrapper[7756]: I0220 12:02:10.579563 7756 scope.go:117] "RemoveContainer" containerID="0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb" Feb 20 12:02:10.580080 master-0 kubenswrapper[7756]: E0220 12:02:10.579918 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" Feb 20 12:02:10.880905 master-0 kubenswrapper[7756]: I0220 12:02:10.880722 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:10.880905 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:10.880905 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:10.880905 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:10.880905 master-0 kubenswrapper[7756]: I0220 12:02:10.880831 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:11.880691 master-0 kubenswrapper[7756]: I0220 12:02:11.880580 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:11.880691 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:11.880691 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:11.880691 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:11.881798 master-0 kubenswrapper[7756]: I0220 12:02:11.880726 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:12.881081 master-0 kubenswrapper[7756]: I0220 12:02:12.880966 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:12.881081 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:12.881081 master-0 kubenswrapper[7756]: [+]process-running ok 
Feb 20 12:02:12.881081 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:12.881081 master-0 kubenswrapper[7756]: I0220 12:02:12.881055 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:12.981093 master-0 kubenswrapper[7756]: E0220 12:02:12.980905 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Feb 20 12:02:12.981093 master-0 kubenswrapper[7756]: &Event{ObjectMeta:{kube-controller-manager-master-0.1895f29569302069 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:a767e0793175d588147a983384ee43db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused Feb 20 12:02:12.981093 master-0 kubenswrapper[7756]: body: Feb 20 12:02:12.981093 master-0 kubenswrapper[7756]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:59:11.968968809 +0000 UTC m=+597.711216857,LastTimestamp:2026-02-20 11:59:11.968968809 +0000 UTC m=+597.711216857,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 20 12:02:12.981093 master-0 kubenswrapper[7756]: > Feb 20 12:02:13.880059 master-0 kubenswrapper[7756]: I0220 12:02:13.879954 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:13.880059 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:13.880059 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:13.880059 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:13.880059 master-0 kubenswrapper[7756]: I0220 12:02:13.880043 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:14.655140 master-0 kubenswrapper[7756]: I0220 12:02:14.655026 7756 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 12:02:14.655140 master-0 kubenswrapper[7756]: I0220 12:02:14.655124 7756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="aa2f6c0cf73fadd0d96a26150bb4dbb3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 12:02:14.656024 master-0 kubenswrapper[7756]: I0220 12:02:14.655190 7756 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 12:02:14.656024 master-0 kubenswrapper[7756]: I0220 12:02:14.655318 
7756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="aa2f6c0cf73fadd0d96a26150bb4dbb3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 12:02:14.879578 master-0 kubenswrapper[7756]: I0220 12:02:14.879424 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:14.879578 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:14.879578 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:14.879578 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:14.879578 master-0 kubenswrapper[7756]: I0220 12:02:14.879496 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:14.970090 master-0 kubenswrapper[7756]: I0220 12:02:14.969871 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 12:02:14.970090 master-0 kubenswrapper[7756]: I0220 12:02:14.970023 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" 
probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 12:02:15.169653 master-0 kubenswrapper[7756]: E0220 12:02:15.169517 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 20 12:02:15.880941 master-0 kubenswrapper[7756]: I0220 12:02:15.880857 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:15.880941 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:15.880941 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:15.880941 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:15.880941 master-0 kubenswrapper[7756]: I0220 12:02:15.880935 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:16.788872 master-0 kubenswrapper[7756]: E0220 12:02:16.788763 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 20 12:02:16.880630 master-0 kubenswrapper[7756]: I0220 12:02:16.880492 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:16.880630 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:16.880630 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:16.880630 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:16.880630 master-0 kubenswrapper[7756]: I0220 12:02:16.880606 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:17.880334 master-0 kubenswrapper[7756]: I0220 12:02:17.880239 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:17.880334 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:17.880334 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:17.880334 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:17.880334 master-0 kubenswrapper[7756]: I0220 12:02:17.880323 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:18.880701 master-0 kubenswrapper[7756]: I0220 12:02:18.880597 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:18.880701 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:18.880701 master-0 kubenswrapper[7756]: [+]process-running ok 
Feb 20 12:02:18.880701 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:18.881731 master-0 kubenswrapper[7756]: I0220 12:02:18.880869 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:19.880710 master-0 kubenswrapper[7756]: I0220 12:02:19.880581 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:19.880710 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:19.880710 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:19.880710 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:19.880710 master-0 kubenswrapper[7756]: I0220 12:02:19.880699 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:20.880776 master-0 kubenswrapper[7756]: I0220 12:02:20.880431 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:20.880776 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:20.880776 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:20.880776 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:20.880776 master-0 kubenswrapper[7756]: I0220 12:02:20.880589 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:21.880557 master-0 kubenswrapper[7756]: I0220 12:02:21.880436 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:21.880557 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:21.880557 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:21.880557 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:21.880557 master-0 kubenswrapper[7756]: I0220 12:02:21.880545 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:22.880250 master-0 kubenswrapper[7756]: I0220 12:02:22.880153 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:22.880250 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:22.880250 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:22.880250 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:22.880668 master-0 kubenswrapper[7756]: I0220 12:02:22.880255 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:23.834702 master-0 kubenswrapper[7756]: I0220 12:02:23.834057 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 12:02:23.879956 master-0 kubenswrapper[7756]: I0220 12:02:23.879840 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:23.879956 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:23.879956 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:23.879956 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:23.879956 master-0 kubenswrapper[7756]: I0220 12:02:23.879918 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:24.880155 master-0 kubenswrapper[7756]: I0220 12:02:24.880052 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:24.880155 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:24.880155 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:24.880155 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:24.881129 master-0 kubenswrapper[7756]: I0220 12:02:24.880160 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:24.969313 master-0 kubenswrapper[7756]: I0220 12:02:24.969225 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 12:02:24.969497 master-0 kubenswrapper[7756]: I0220 12:02:24.969327 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:02:24.969497 master-0 kubenswrapper[7756]: I0220 12:02:24.969406 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:02:24.970441 master-0 kubenswrapper[7756]: I0220 12:02:24.970382 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"75d9d37cd248f0e723c9a7084c00a12621efc7a713ce161db93fefc8ef371e85"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 20 12:02:24.970628 master-0 kubenswrapper[7756]: I0220 12:02:24.970582 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" containerID="cri-o://75d9d37cd248f0e723c9a7084c00a12621efc7a713ce161db93fefc8ef371e85" gracePeriod=30
Feb 20 12:02:25.579096 master-0 kubenswrapper[7756]: I0220 12:02:25.579021 7756 scope.go:117] "RemoveContainer" containerID="0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb"
Feb 20 12:02:25.579497 master-0 kubenswrapper[7756]: E0220 12:02:25.579442 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e"
Feb 20 12:02:25.710600 master-0 kubenswrapper[7756]: I0220 12:02:25.710451 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/2.log"
Feb 20 12:02:25.711204 master-0 kubenswrapper[7756]: I0220 12:02:25.711140 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/1.log"
Feb 20 12:02:25.713751 master-0 kubenswrapper[7756]: I0220 12:02:25.713685 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log"
Feb 20 12:02:25.713847 master-0 kubenswrapper[7756]: I0220 12:02:25.713796 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="75d9d37cd248f0e723c9a7084c00a12621efc7a713ce161db93fefc8ef371e85" exitCode=255
Feb 20 12:02:25.713932 master-0 kubenswrapper[7756]: I0220 12:02:25.713841 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerDied","Data":"75d9d37cd248f0e723c9a7084c00a12621efc7a713ce161db93fefc8ef371e85"}
Feb 20 12:02:25.713932 master-0 kubenswrapper[7756]: I0220 12:02:25.713881 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"}
Feb 20 12:02:25.713932 master-0 kubenswrapper[7756]: I0220 12:02:25.713910 7756 scope.go:117] "RemoveContainer" containerID="3c1968a342bc2b1038295e83962863fa158ee7e02e808b6ba1e7db9cacdb32ad"
Feb 20 12:02:25.880063 master-0 kubenswrapper[7756]: I0220 12:02:25.879949 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:25.880063 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:25.880063 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:25.880063 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:25.881134 master-0 kubenswrapper[7756]: I0220 12:02:25.880058 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:26.726402 master-0 kubenswrapper[7756]: I0220 12:02:26.726295 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/2.log"
Feb 20 12:02:26.728826 master-0 kubenswrapper[7756]: I0220 12:02:26.728762 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log"
Feb 20 12:02:26.880641 master-0 kubenswrapper[7756]: I0220 12:02:26.880565 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:26.880641 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:26.880641 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:26.880641 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:26.881735 master-0 kubenswrapper[7756]: I0220 12:02:26.880646 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:27.879757 master-0 kubenswrapper[7756]: I0220 12:02:27.879672 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:27.879757 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:27.879757 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:27.879757 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:27.880389 master-0 kubenswrapper[7756]: I0220 12:02:27.879771 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:28.633061 master-0 kubenswrapper[7756]: E0220 12:02:28.632957 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 20 12:02:28.880505 master-0 kubenswrapper[7756]: I0220 12:02:28.880412 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:28.880505 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:28.880505 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:28.880505 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:28.880964 master-0 kubenswrapper[7756]: I0220 12:02:28.880512 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:29.757695 master-0 kubenswrapper[7756]: I0220 12:02:29.757645 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/cluster-baremetal-operator/1.log"
Feb 20 12:02:29.759114 master-0 kubenswrapper[7756]: I0220 12:02:29.759057 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/cluster-baremetal-operator/0.log"
Feb 20 12:02:29.759248 master-0 kubenswrapper[7756]: I0220 12:02:29.759147 7756 generic.go:334] "Generic (PLEG): container finished" podID="bd609bd3-2525-4b88-8f07-94a0418fb582" containerID="1f0c874f0434630bd93de4bc13495f67300659cb1712b213b4e98726a3091219" exitCode=1
Feb 20 12:02:29.759248 master-0 kubenswrapper[7756]: I0220 12:02:29.759197 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" event={"ID":"bd609bd3-2525-4b88-8f07-94a0418fb582","Type":"ContainerDied","Data":"1f0c874f0434630bd93de4bc13495f67300659cb1712b213b4e98726a3091219"}
Feb 20 12:02:29.759381 master-0 kubenswrapper[7756]: I0220 12:02:29.759260 7756 scope.go:117] "RemoveContainer" containerID="7e35d0d46d086257733502e192ec247382f7e26c3f0f6b4f8392900b3f91657b"
Feb 20 12:02:29.759980 master-0 kubenswrapper[7756]: I0220 12:02:29.759927 7756 scope.go:117] "RemoveContainer" containerID="1f0c874f0434630bd93de4bc13495f67300659cb1712b213b4e98726a3091219"
Feb 20 12:02:29.760351 master-0 kubenswrapper[7756]: E0220 12:02:29.760299 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-k95mq_openshift-machine-api(bd609bd3-2525-4b88-8f07-94a0418fb582)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" podUID="bd609bd3-2525-4b88-8f07-94a0418fb582"
Feb 20 12:02:29.879483 master-0 kubenswrapper[7756]: I0220 12:02:29.879394 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:29.879483 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:29.879483 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:29.879483 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:29.879483 master-0 kubenswrapper[7756]: I0220 12:02:29.879463 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:30.769211 master-0 kubenswrapper[7756]: I0220 12:02:30.769127 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/cluster-baremetal-operator/1.log"
Feb 20 12:02:30.880166 master-0 kubenswrapper[7756]: I0220 12:02:30.880063 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:30.880166 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:30.880166 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:30.880166 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:30.880647 master-0 kubenswrapper[7756]: I0220 12:02:30.880189 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:31.880630 master-0 kubenswrapper[7756]: I0220 12:02:31.880507 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:31.880630 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:31.880630 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:31.880630 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:31.881374 master-0 kubenswrapper[7756]: I0220 12:02:31.880632 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:31.968904 master-0 kubenswrapper[7756]: I0220 12:02:31.968803 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:02:31.968904 master-0 kubenswrapper[7756]: I0220 12:02:31.968881 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:02:32.880281 master-0 kubenswrapper[7756]: I0220 12:02:32.880168 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:32.880281 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:32.880281 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:32.880281 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:32.881582 master-0 kubenswrapper[7756]: I0220 12:02:32.880294 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:33.790058 master-0 kubenswrapper[7756]: E0220 12:02:33.789970 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 12:02:33.880403 master-0 kubenswrapper[7756]: I0220 12:02:33.880296 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:33.880403 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:33.880403 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:33.880403 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:33.881346 master-0 kubenswrapper[7756]: I0220 12:02:33.880449 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:34.880853 master-0 kubenswrapper[7756]: I0220 12:02:34.880737 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:34.880853 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:34.880853 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:34.880853 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:34.880853 master-0 kubenswrapper[7756]: I0220 12:02:34.880826 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:34.969374 master-0 kubenswrapper[7756]: I0220 12:02:34.969254 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 12:02:34.969374 master-0 kubenswrapper[7756]: I0220 12:02:34.969350 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:02:35.880697 master-0 kubenswrapper[7756]: I0220 12:02:35.880589 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:35.880697 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:35.880697 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:35.880697 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:35.880697 master-0 kubenswrapper[7756]: I0220 12:02:35.880682 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:36.880588 master-0 kubenswrapper[7756]: I0220 12:02:36.880486 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:36.880588 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:36.880588 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:36.880588 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:36.881566 master-0 kubenswrapper[7756]: I0220 12:02:36.880602 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:37.880189 master-0 kubenswrapper[7756]: I0220 12:02:37.880129 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:37.880189 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:37.880189 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:37.880189 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:37.880776 master-0 kubenswrapper[7756]: I0220 12:02:37.880731 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:38.879937 master-0 kubenswrapper[7756]: I0220 12:02:38.879880 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:38.879937 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:38.879937 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:38.879937 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:38.880602 master-0 kubenswrapper[7756]: I0220 12:02:38.879954 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:39.580621 master-0 kubenswrapper[7756]: I0220 12:02:39.580565 7756 scope.go:117] "RemoveContainer" containerID="0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb"
Feb 20 12:02:39.581481 master-0 kubenswrapper[7756]: E0220 12:02:39.580968 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e"
Feb 20 12:02:39.848093 master-0 kubenswrapper[7756]: I0220 12:02:39.847869 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/3.log"
Feb 20 12:02:39.848756 master-0 kubenswrapper[7756]: I0220 12:02:39.848685 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/2.log"
Feb 20 12:02:39.848907 master-0 kubenswrapper[7756]: I0220 12:02:39.848787 7756 generic.go:334] "Generic (PLEG): container finished" podID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4" containerID="de9e8c93c0df2890c4752dca06392332c979a4e0bdd653de93c036cd77ec19ee" exitCode=1
Feb 20 12:02:39.848907 master-0 kubenswrapper[7756]: I0220 12:02:39.848872 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerDied","Data":"de9e8c93c0df2890c4752dca06392332c979a4e0bdd653de93c036cd77ec19ee"}
Feb 20 12:02:39.849118 master-0 kubenswrapper[7756]: I0220 12:02:39.848971 7756 scope.go:117] "RemoveContainer" containerID="f6b6d42109f301154569dbcc355083057306381c6e4ddc5df7a556bda8392333"
Feb 20 12:02:39.850379 master-0 kubenswrapper[7756]: I0220 12:02:39.850325 7756 scope.go:117] "RemoveContainer" containerID="de9e8c93c0df2890c4752dca06392332c979a4e0bdd653de93c036cd77ec19ee"
Feb 20 12:02:39.851368 master-0 kubenswrapper[7756]: E0220 12:02:39.850727 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-792hn_openshift-cluster-storage-operator(bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" podUID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4"
Feb 20 12:02:39.880350 master-0 kubenswrapper[7756]: I0220 12:02:39.880293 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:39.880350 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:39.880350 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:39.880350 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:39.882083 master-0 kubenswrapper[7756]: I0220 12:02:39.882038 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:40.860750 master-0 kubenswrapper[7756]: I0220 12:02:40.860708 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/3.log"
Feb 20 12:02:40.879715 master-0 kubenswrapper[7756]: I0220 12:02:40.879651 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:40.879715 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:40.879715 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:40.879715 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:40.879999 master-0 kubenswrapper[7756]: I0220 12:02:40.879729 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:41.880109 master-0 kubenswrapper[7756]: I0220 12:02:41.879997 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:41.880109 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:41.880109 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:41.880109 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:41.880109 master-0 kubenswrapper[7756]: I0220 12:02:41.880107 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:42.879980 master-0 kubenswrapper[7756]: I0220 12:02:42.879872 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:42.879980 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:42.879980 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:42.879980 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:42.879980 master-0 kubenswrapper[7756]: I0220 12:02:42.879972 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:43.882674 master-0 kubenswrapper[7756]: I0220 12:02:43.882566 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:43.882674 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:43.882674 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:43.882674 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:43.882674 master-0 kubenswrapper[7756]: I0220 12:02:43.882649 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:44.587112 master-0 kubenswrapper[7756]: I0220 12:02:44.586978 7756 scope.go:117] "RemoveContainer" containerID="1f0c874f0434630bd93de4bc13495f67300659cb1712b213b4e98726a3091219"
Feb 20 12:02:44.880336 master-0 kubenswrapper[7756]: I0220 12:02:44.880161 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:44.880336 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:44.880336 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:44.880336 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:44.880336 master-0 kubenswrapper[7756]: I0220 12:02:44.880258 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:44.897383 master-0 kubenswrapper[7756]: I0220 12:02:44.897303 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/cluster-baremetal-operator/1.log"
Feb 20 12:02:44.898440 master-0 kubenswrapper[7756]: I0220 12:02:44.897900 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" event={"ID":"bd609bd3-2525-4b88-8f07-94a0418fb582","Type":"ContainerStarted","Data":"8c69b4193862352c262a6060adcf3ebac17385c769a24def2618758d665cba7c"}
Feb 20 12:02:44.969466 master-0 kubenswrapper[7756]: I0220 12:02:44.969369 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 12:02:44.969818 master-0 kubenswrapper[7756]: I0220 12:02:44.969487 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:02:45.879077 master-0 kubenswrapper[7756]: I0220 12:02:45.878988 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:45.879077 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:45.879077 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:45.879077 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:45.879600 master-0 kubenswrapper[7756]: I0220 12:02:45.879074 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:46.880762 master-0 kubenswrapper[7756]: I0220 12:02:46.880678 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:46.880762 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:46.880762 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:46.880762 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:46.881815 master-0 kubenswrapper[7756]: I0220 12:02:46.880788 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:02:46.983566 master-0 kubenswrapper[7756]: E0220 12:02:46.983368 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1895f29569310be3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:a767e0793175d588147a983384ee43db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:59:11.969029091 +0000 UTC m=+597.711277129,LastTimestamp:2026-02-20 11:59:11.969029091 +0000 UTC m=+597.711277129,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 12:02:47.880207 master-0 kubenswrapper[7756]: I0220 12:02:47.880124 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:02:47.880207 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:02:47.880207 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:02:47.880207 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:02:47.880745 master-0 kubenswrapper[7756]: I0220 12:02:47.880215 7756 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:48.879859 master-0 kubenswrapper[7756]: I0220 12:02:48.879772 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:48.879859 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:48.879859 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:48.879859 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:48.880887 master-0 kubenswrapper[7756]: I0220 12:02:48.879864 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:49.879835 master-0 kubenswrapper[7756]: I0220 12:02:49.879779 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:49.879835 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:49.879835 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:49.879835 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:49.880909 master-0 kubenswrapper[7756]: I0220 12:02:49.880864 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 20 12:02:50.791936 master-0 kubenswrapper[7756]: E0220 12:02:50.791845 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 20 12:02:50.879948 master-0 kubenswrapper[7756]: I0220 12:02:50.879861 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:50.879948 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:50.879948 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:50.879948 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:50.880993 master-0 kubenswrapper[7756]: I0220 12:02:50.879965 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:51.579419 master-0 kubenswrapper[7756]: I0220 12:02:51.579346 7756 scope.go:117] "RemoveContainer" containerID="de9e8c93c0df2890c4752dca06392332c979a4e0bdd653de93c036cd77ec19ee" Feb 20 12:02:51.579799 master-0 kubenswrapper[7756]: E0220 12:02:51.579753 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-792hn_openshift-cluster-storage-operator(bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" podUID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4" Feb 20 
12:02:51.880924 master-0 kubenswrapper[7756]: I0220 12:02:51.880745 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:51.880924 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:51.880924 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:51.880924 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:51.880924 master-0 kubenswrapper[7756]: I0220 12:02:51.880860 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:52.879878 master-0 kubenswrapper[7756]: I0220 12:02:52.879771 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:52.879878 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:52.879878 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:52.879878 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:52.880297 master-0 kubenswrapper[7756]: I0220 12:02:52.879870 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:53.880922 master-0 kubenswrapper[7756]: I0220 12:02:53.880832 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:53.880922 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:53.880922 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:53.880922 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:53.881952 master-0 kubenswrapper[7756]: I0220 12:02:53.880946 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:54.584863 master-0 kubenswrapper[7756]: I0220 12:02:54.584778 7756 scope.go:117] "RemoveContainer" containerID="0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb" Feb 20 12:02:54.585326 master-0 kubenswrapper[7756]: E0220 12:02:54.585261 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" Feb 20 12:02:54.880401 master-0 kubenswrapper[7756]: I0220 12:02:54.880259 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:54.880401 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:54.880401 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:54.880401 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:54.880401 master-0 kubenswrapper[7756]: I0220 12:02:54.880332 7756 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:54.969714 master-0 kubenswrapper[7756]: I0220 12:02:54.969586 7756 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 12:02:54.969714 master-0 kubenswrapper[7756]: I0220 12:02:54.969694 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 12:02:54.970821 master-0 kubenswrapper[7756]: I0220 12:02:54.969790 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:02:54.971143 master-0 kubenswrapper[7756]: I0220 12:02:54.971070 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 20 12:02:54.971337 master-0 kubenswrapper[7756]: I0220 12:02:54.971277 7756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" containerID="cri-o://60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726" gracePeriod=30 Feb 20 12:02:55.095954 master-0 kubenswrapper[7756]: E0220 12:02:55.095894 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(a767e0793175d588147a983384ee43db)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" Feb 20 12:02:55.880514 master-0 kubenswrapper[7756]: I0220 12:02:55.880418 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:55.880514 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:55.880514 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:55.880514 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:55.880514 master-0 kubenswrapper[7756]: I0220 12:02:55.880507 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:56.002834 master-0 kubenswrapper[7756]: I0220 12:02:56.002727 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/3.log" Feb 20 12:02:56.003729 master-0 kubenswrapper[7756]: I0220 
12:02:56.003579 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/2.log" Feb 20 12:02:56.007013 master-0 kubenswrapper[7756]: I0220 12:02:56.006955 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log" Feb 20 12:02:56.007099 master-0 kubenswrapper[7756]: I0220 12:02:56.007047 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726" exitCode=255 Feb 20 12:02:56.007182 master-0 kubenswrapper[7756]: I0220 12:02:56.007101 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerDied","Data":"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"} Feb 20 12:02:56.007182 master-0 kubenswrapper[7756]: I0220 12:02:56.007166 7756 scope.go:117] "RemoveContainer" containerID="75d9d37cd248f0e723c9a7084c00a12621efc7a713ce161db93fefc8ef371e85" Feb 20 12:02:56.008183 master-0 kubenswrapper[7756]: I0220 12:02:56.008119 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726" Feb 20 12:02:56.008787 master-0 kubenswrapper[7756]: E0220 12:02:56.008734 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(a767e0793175d588147a983384ee43db)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" Feb 20 
12:02:56.881205 master-0 kubenswrapper[7756]: I0220 12:02:56.881094 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:56.881205 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:56.881205 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:56.881205 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:56.882043 master-0 kubenswrapper[7756]: I0220 12:02:56.881216 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:57.017981 master-0 kubenswrapper[7756]: I0220 12:02:57.017877 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/3.log" Feb 20 12:02:57.021122 master-0 kubenswrapper[7756]: I0220 12:02:57.021049 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log" Feb 20 12:02:57.879500 master-0 kubenswrapper[7756]: I0220 12:02:57.879421 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:57.879500 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:57.879500 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:57.879500 master-0 kubenswrapper[7756]: healthz check failed Feb 20 
12:02:57.879781 master-0 kubenswrapper[7756]: I0220 12:02:57.879511 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:58.880601 master-0 kubenswrapper[7756]: I0220 12:02:58.880486 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:58.880601 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:58.880601 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:58.880601 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:58.880601 master-0 kubenswrapper[7756]: I0220 12:02:58.880595 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:02:59.880797 master-0 kubenswrapper[7756]: I0220 12:02:59.880713 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:02:59.880797 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:02:59.880797 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:02:59.880797 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:02:59.881809 master-0 kubenswrapper[7756]: I0220 12:02:59.880821 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" 
podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:00.881470 master-0 kubenswrapper[7756]: I0220 12:03:00.881406 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:03:00.881470 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:00.881470 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:00.881470 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:03:00.882262 master-0 kubenswrapper[7756]: I0220 12:03:00.881501 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:01.880033 master-0 kubenswrapper[7756]: I0220 12:03:01.879940 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:03:01.880033 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:01.880033 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:01.880033 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:03:01.880033 master-0 kubenswrapper[7756]: I0220 12:03:01.880024 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:01.968694 master-0 kubenswrapper[7756]: I0220 12:03:01.968596 7756 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:03:01.969915 master-0 kubenswrapper[7756]: I0220 12:03:01.969596 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726" Feb 20 12:03:01.969915 master-0 kubenswrapper[7756]: E0220 12:03:01.969871 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(a767e0793175d588147a983384ee43db)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" Feb 20 12:03:02.880749 master-0 kubenswrapper[7756]: I0220 12:03:02.880630 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:03:02.880749 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:02.880749 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:02.880749 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:03:02.880749 master-0 kubenswrapper[7756]: I0220 12:03:02.880717 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:03.070948 master-0 kubenswrapper[7756]: I0220 12:03:03.070812 7756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/3.log" Feb 20 12:03:03.072565 master-0 kubenswrapper[7756]: I0220 12:03:03.072462 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager-cert-syncer/0.log" Feb 20 12:03:03.073571 master-0 kubenswrapper[7756]: I0220 12:03:03.073486 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log" Feb 20 12:03:03.073687 master-0 kubenswrapper[7756]: I0220 12:03:03.073603 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4" exitCode=1 Feb 20 12:03:03.073687 master-0 kubenswrapper[7756]: I0220 12:03:03.073646 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerDied","Data":"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"} Feb 20 12:03:03.076180 master-0 kubenswrapper[7756]: I0220 12:03:03.074361 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726" Feb 20 12:03:03.076180 master-0 kubenswrapper[7756]: I0220 12:03:03.074393 7756 scope.go:117] "RemoveContainer" containerID="112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4" Feb 20 12:03:03.336008 master-0 kubenswrapper[7756]: E0220 12:03:03.335942 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(a767e0793175d588147a983384ee43db)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" Feb 20 12:03:03.880190 master-0 kubenswrapper[7756]: I0220 12:03:03.880094 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:03:03.880190 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:03.880190 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:03.880190 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:03:03.880190 master-0 kubenswrapper[7756]: I0220 12:03:03.880180 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:04.084796 master-0 kubenswrapper[7756]: I0220 12:03:04.084693 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/3.log" Feb 20 12:03:04.087251 master-0 kubenswrapper[7756]: I0220 12:03:04.087179 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager-cert-syncer/0.log" Feb 20 12:03:04.088361 master-0 kubenswrapper[7756]: I0220 12:03:04.088306 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log" Feb 20 12:03:04.088478 master-0 kubenswrapper[7756]: I0220 
12:03:04.088393 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477"}
Feb 20 12:03:04.089371 master-0 kubenswrapper[7756]: I0220 12:03:04.089320 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"
Feb 20 12:03:04.089844 master-0 kubenswrapper[7756]: E0220 12:03:04.089792 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(a767e0793175d588147a983384ee43db)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db"
Feb 20 12:03:04.880057 master-0 kubenswrapper[7756]: I0220 12:03:04.879962 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:04.880057 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:04.880057 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:04.880057 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:04.880495 master-0 kubenswrapper[7756]: I0220 12:03:04.880078 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:05.880097 master-0 kubenswrapper[7756]: I0220 12:03:05.880021 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:05.880097 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:05.880097 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:05.880097 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:05.881079 master-0 kubenswrapper[7756]: I0220 12:03:05.880123 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:06.579138 master-0 kubenswrapper[7756]: I0220 12:03:06.579062 7756 scope.go:117] "RemoveContainer" containerID="de9e8c93c0df2890c4752dca06392332c979a4e0bdd653de93c036cd77ec19ee"
Feb 20 12:03:06.579478 master-0 kubenswrapper[7756]: E0220 12:03:06.579428 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-792hn_openshift-cluster-storage-operator(bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" podUID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4"
Feb 20 12:03:06.879581 master-0 kubenswrapper[7756]: I0220 12:03:06.879412 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:06.879581 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:06.879581 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:06.879581 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:06.879581 master-0 kubenswrapper[7756]: I0220 12:03:06.879513 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:07.578941 master-0 kubenswrapper[7756]: I0220 12:03:07.578885 7756 scope.go:117] "RemoveContainer" containerID="0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb"
Feb 20 12:03:07.580044 master-0 kubenswrapper[7756]: E0220 12:03:07.579971 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" podUID="db2a7cb1-1d05-4b24-86ed-f823fad5013e"
Feb 20 12:03:07.793853 master-0 kubenswrapper[7756]: E0220 12:03:07.793746 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 12:03:07.880433 master-0 kubenswrapper[7756]: I0220 12:03:07.880247 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:07.880433 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:07.880433 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:07.880433 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:07.880433 master-0 kubenswrapper[7756]: I0220 12:03:07.880371 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:08.880517 master-0 kubenswrapper[7756]: I0220 12:03:08.880441 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:08.880517 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:08.880517 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:08.880517 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:08.881881 master-0 kubenswrapper[7756]: I0220 12:03:08.880574 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:09.879879 master-0 kubenswrapper[7756]: I0220 12:03:09.879803 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:09.879879 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:09.879879 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:09.879879 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:09.880205 master-0 kubenswrapper[7756]: I0220 12:03:09.879891 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:10.880407 master-0 kubenswrapper[7756]: I0220 12:03:10.880341 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:10.880407 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:10.880407 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:10.880407 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:10.880961 master-0 kubenswrapper[7756]: I0220 12:03:10.880417 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:11.880427 master-0 kubenswrapper[7756]: I0220 12:03:11.880311 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:11.880427 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:11.880427 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:11.880427 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:11.880427 master-0 kubenswrapper[7756]: I0220 12:03:11.880421 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:12.879903 master-0 kubenswrapper[7756]: I0220 12:03:12.879826 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:12.879903 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:12.879903 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:12.879903 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:12.880308 master-0 kubenswrapper[7756]: I0220 12:03:12.879929 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:13.880677 master-0 kubenswrapper[7756]: I0220 12:03:13.880583 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:13.880677 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:13.880677 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:13.880677 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:13.881644 master-0 kubenswrapper[7756]: I0220 12:03:13.880688 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:14.880765 master-0 kubenswrapper[7756]: I0220 12:03:14.880725 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:14.880765 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:14.880765 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:14.880765 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:14.881448 master-0 kubenswrapper[7756]: I0220 12:03:14.881424 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:15.880447 master-0 kubenswrapper[7756]: I0220 12:03:15.880340 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:15.880447 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:15.880447 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:15.880447 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:15.881475 master-0 kubenswrapper[7756]: I0220 12:03:15.880456 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:16.579295 master-0 kubenswrapper[7756]: I0220 12:03:16.579216 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"
Feb 20 12:03:16.579776 master-0 kubenswrapper[7756]: E0220 12:03:16.579713 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(a767e0793175d588147a983384ee43db)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db"
Feb 20 12:03:16.880266 master-0 kubenswrapper[7756]: I0220 12:03:16.880119 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:16.880266 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:16.880266 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:16.880266 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:16.880266 master-0 kubenswrapper[7756]: I0220 12:03:16.880208 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:17.880427 master-0 kubenswrapper[7756]: I0220 12:03:17.880339 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:17.880427 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:17.880427 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:17.880427 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:17.881524 master-0 kubenswrapper[7756]: I0220 12:03:17.880446 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:18.880521 master-0 kubenswrapper[7756]: I0220 12:03:18.880430 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:18.880521 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:18.880521 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:18.880521 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:18.881565 master-0 kubenswrapper[7756]: I0220 12:03:18.880558 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:19.579379 master-0 kubenswrapper[7756]: I0220 12:03:19.579300 7756 scope.go:117] "RemoveContainer" containerID="0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb"
Feb 20 12:03:19.880533 master-0 kubenswrapper[7756]: I0220 12:03:19.880359 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:19.880533 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:19.880533 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:19.880533 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:19.880533 master-0 kubenswrapper[7756]: I0220 12:03:19.880478 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:20.224095 master-0 kubenswrapper[7756]: I0220 12:03:20.224009 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/4.log"
Feb 20 12:03:20.224752 master-0 kubenswrapper[7756]: I0220 12:03:20.224660 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" event={"ID":"db2a7cb1-1d05-4b24-86ed-f823fad5013e","Type":"ContainerStarted","Data":"3d793e4b052662e3007d46e716737e20ceb8e6cf9a8ef63f129c3db48ce01b88"}
Feb 20 12:03:20.881073 master-0 kubenswrapper[7756]: I0220 12:03:20.880960 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:20.881073 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:20.881073 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:20.881073 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:20.882096 master-0 kubenswrapper[7756]: I0220 12:03:20.881128 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:20.987219 master-0 kubenswrapper[7756]: E0220 12:03:20.986934 7756 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-6569778c84-kw2v6.1895f2553b8b349e openshift-ingress-operator 11443 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-6569778c84-kw2v6,UID:db2a7cb1-1d05-4b24-86ed-f823fad5013e,APIVersion:v1,ResourceVersion:3581,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-6569778c84-kw2v6_openshift-ingress-operator(db2a7cb1-1d05-4b24-86ed-f823fad5013e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 11:54:36 +0000 UTC,LastTimestamp:2026-02-20 11:59:12.87108922 +0000 UTC m=+598.613337268,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 12:03:21.578247 master-0 kubenswrapper[7756]: I0220 12:03:21.578169 7756 scope.go:117] "RemoveContainer" containerID="de9e8c93c0df2890c4752dca06392332c979a4e0bdd653de93c036cd77ec19ee"
Feb 20 12:03:21.880733 master-0 kubenswrapper[7756]: I0220 12:03:21.880596 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:21.880733 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:21.880733 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:21.880733 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:21.880733 master-0 kubenswrapper[7756]: I0220 12:03:21.880681 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:22.245903 master-0 kubenswrapper[7756]: I0220 12:03:22.245731 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/3.log"
Feb 20 12:03:22.245903 master-0 kubenswrapper[7756]: I0220 12:03:22.245825 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" event={"ID":"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4","Type":"ContainerStarted","Data":"2b67a87579a7d061aa6f176b965b8347b58138b0036e2e0bb2772787eac44faf"}
Feb 20 12:03:22.880750 master-0 kubenswrapper[7756]: I0220 12:03:22.880646 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:22.880750 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:22.880750 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:22.880750 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:22.880750 master-0 kubenswrapper[7756]: I0220 12:03:22.880751 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:23.880740 master-0 kubenswrapper[7756]: I0220 12:03:23.880644 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:23.880740 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:23.880740 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:23.880740 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:23.881881 master-0 kubenswrapper[7756]: I0220 12:03:23.880746 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:24.796369 master-0 kubenswrapper[7756]: E0220 12:03:24.796200 7756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Feb 20 12:03:24.880791 master-0 kubenswrapper[7756]: I0220 12:03:24.880659 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:24.880791 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:24.880791 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:24.880791 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:24.880791 master-0 kubenswrapper[7756]: I0220 12:03:24.880762 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:25.880086 master-0 kubenswrapper[7756]: I0220 12:03:25.879982 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:25.880086 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:25.880086 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:25.880086 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:25.880086 master-0 kubenswrapper[7756]: I0220 12:03:25.880056 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:26.914881 master-0 kubenswrapper[7756]: I0220 12:03:26.913927 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:26.914881 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:26.914881 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:26.914881 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:26.914881 master-0 kubenswrapper[7756]: I0220 12:03:26.913991 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:27.879804 master-0 kubenswrapper[7756]: I0220 12:03:27.879730 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:27.879804 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:27.879804 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:27.879804 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:27.880126 master-0 kubenswrapper[7756]: I0220 12:03:27.879835 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:28.880230 master-0 kubenswrapper[7756]: I0220 12:03:28.880142 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:28.880230 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:28.880230 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:28.880230 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:28.880977 master-0 kubenswrapper[7756]: I0220 12:03:28.880227 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:29.879520 master-0 kubenswrapper[7756]: I0220 12:03:29.879448 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:29.879520 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:29.879520 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:29.879520 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:29.879520 master-0 kubenswrapper[7756]: I0220 12:03:29.879541 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:30.880033 master-0 kubenswrapper[7756]: I0220 12:03:30.879929 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:30.880033 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:30.880033 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:30.880033 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:30.880757 master-0 kubenswrapper[7756]: I0220 12:03:30.880084 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:31.579589 master-0 kubenswrapper[7756]: I0220 12:03:31.579473 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"
Feb 20 12:03:31.580614 master-0 kubenswrapper[7756]: E0220 12:03:31.579999 7756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(a767e0793175d588147a983384ee43db)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db"
Feb 20 12:03:31.880333 master-0 kubenswrapper[7756]: I0220 12:03:31.880173 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:31.880333 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:31.880333 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:31.880333 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:31.880333 master-0 kubenswrapper[7756]: I0220 12:03:31.880286 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:32.327544 master-0 kubenswrapper[7756]: I0220 12:03:32.327480 7756 generic.go:334] "Generic (PLEG): container finished" podID="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" containerID="c6679863b5436d03c685416538ec6a0c239b8d55dfa6ed45b92990d366d1cd74" exitCode=0
Feb 20 12:03:32.327544 master-0 kubenswrapper[7756]: I0220 12:03:32.327536 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" event={"ID":"1d3a36bb-9d11-48b3-a3b5-07b47738ef97","Type":"ContainerDied","Data":"c6679863b5436d03c685416538ec6a0c239b8d55dfa6ed45b92990d366d1cd74"}
Feb 20 12:03:32.327776 master-0 kubenswrapper[7756]: I0220 12:03:32.327570 7756 scope.go:117] "RemoveContainer" containerID="8d90051cb425dcfb05eea700daacd614186eaabfc560fdf17a2b201fc46c56ad"
Feb 20 12:03:32.328099 master-0 kubenswrapper[7756]: I0220 12:03:32.328071 7756 scope.go:117] "RemoveContainer" containerID="c6679863b5436d03c685416538ec6a0c239b8d55dfa6ed45b92990d366d1cd74"
Feb 20 12:03:32.880162 master-0 kubenswrapper[7756]: I0220 12:03:32.880094 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:32.880162 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:32.880162 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:32.880162 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:32.880752 master-0 kubenswrapper[7756]: I0220 12:03:32.880179 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:33.337253 master-0 kubenswrapper[7756]: I0220 12:03:33.337174 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" event={"ID":"1d3a36bb-9d11-48b3-a3b5-07b47738ef97","Type":"ContainerStarted","Data":"40bc3a9da9992a4a44588d7fbb9bbc0abe8146bf44c6d048e8584cfa451b9841"}
Feb 20 12:03:33.883469 master-0 kubenswrapper[7756]: I0220 12:03:33.883351 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:33.883469 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:33.883469 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:33.883469 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:33.884721 master-0 kubenswrapper[7756]: I0220 12:03:33.883499 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:34.880696 master-0 kubenswrapper[7756]: I0220 12:03:34.880594 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:34.880696 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:34.880696 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:34.880696 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:34.880696 master-0 kubenswrapper[7756]: I0220 12:03:34.880673 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:35.879989 master-0 kubenswrapper[7756]: I0220 12:03:35.879919 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:03:35.879989 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:03:35.879989 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:03:35.879989 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:03:35.879989 master-0 kubenswrapper[7756]: I0220 12:03:35.879982 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:03:36.576656 master-0 kubenswrapper[7756]: I0220 12:03:36.576563 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"]
Feb 20 12:03:36.576961 master-0 kubenswrapper[7756]: E0220 12:03:36.576863 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de8fb9d-34f7-49bc-867d-827a0f9a11e7" containerName="installer"
Feb 20 12:03:36.576961 master-0 kubenswrapper[7756]: I0220 12:03:36.576878 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de8fb9d-34f7-49bc-867d-827a0f9a11e7" containerName="installer"
Feb 20 12:03:36.576961 master-0 kubenswrapper[7756]: E0220 12:03:36.576911 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f22083d-dc18-4acd-aa7f-d01d407c7837" containerName="installer"
Feb 20 12:03:36.576961 master-0 kubenswrapper[7756]: I0220 12:03:36.576920 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f22083d-dc18-4acd-aa7f-d01d407c7837" containerName="installer"
Feb 20 12:03:36.576961 master-0 kubenswrapper[7756]: E0220 12:03:36.576937 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305f625e-16b0-4840-a9e2-25571b49ad2a" containerName="installer"
Feb 20 12:03:36.576961 master-0 kubenswrapper[7756]: I0220 12:03:36.576948 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="305f625e-16b0-4840-a9e2-25571b49ad2a" containerName="installer"
Feb 20 12:03:36.577451 master-0 kubenswrapper[7756]: I0220 12:03:36.577095 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="305f625e-16b0-4840-a9e2-25571b49ad2a" containerName="installer"
Feb 20 12:03:36.577451 master-0 kubenswrapper[7756]: I0220 12:03:36.577113 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f22083d-dc18-4acd-aa7f-d01d407c7837" containerName="installer"
Feb 20 12:03:36.577451 master-0 kubenswrapper[7756]: I0220 12:03:36.577122 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de8fb9d-34f7-49bc-867d-827a0f9a11e7" containerName="installer"
Feb 20 12:03:36.577731 master-0 kubenswrapper[7756]: I0220 12:03:36.577633 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0"
Feb 20 12:03:36.582240 master-0 kubenswrapper[7756]: I0220 12:03:36.582177 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9fz4f"
Feb 20 12:03:36.582884 master-0 kubenswrapper[7756]: I0220 12:03:36.582844 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 20 12:03:36.589059 master-0 kubenswrapper[7756]: I0220 12:03:36.589002 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 20 12:03:36.591293 master-0 kubenswrapper[7756]: I0220 12:03:36.591239 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"]
Feb 20 12:03:36.729634 master-0 kubenswrapper[7756]: I0220 12:03:36.729545 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0"
Feb 20 12:03:36.729921 master-0 kubenswrapper[7756]: I0220 12:03:36.729832 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0"
Feb 20 12:03:36.730054 master-0 kubenswrapper[7756]: I0220 12:03:36.730009 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kube-api-access\") pod
\"installer-2-retry-1-master-0\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:03:36.832572 master-0 kubenswrapper[7756]: I0220 12:03:36.832395 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:03:36.832783 master-0 kubenswrapper[7756]: I0220 12:03:36.832678 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:03:36.832835 master-0 kubenswrapper[7756]: I0220 12:03:36.832804 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:03:36.832888 master-0 kubenswrapper[7756]: I0220 12:03:36.832820 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:03:36.832888 master-0 kubenswrapper[7756]: I0220 12:03:36.832875 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:03:36.854720 master-0 kubenswrapper[7756]: I0220 12:03:36.854654 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:03:36.880278 master-0 kubenswrapper[7756]: I0220 12:03:36.880203 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:03:36.880278 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:36.880278 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:36.880278 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:03:36.881060 master-0 kubenswrapper[7756]: I0220 12:03:36.880312 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:36.902189 master-0 kubenswrapper[7756]: I0220 12:03:36.902097 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:03:37.351972 master-0 kubenswrapper[7756]: I0220 12:03:37.351299 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Feb 20 12:03:37.358762 master-0 kubenswrapper[7756]: W0220 12:03:37.358670 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9cd8b719_50cb_45ad_ac5f_a01aa45bf832.slice/crio-0ce9620617bed9f134ebbea3a051f3826acc79ea8ed40edc00f1e5c077e14166 WatchSource:0}: Error finding container 0ce9620617bed9f134ebbea3a051f3826acc79ea8ed40edc00f1e5c077e14166: Status 404 returned error can't find the container with id 0ce9620617bed9f134ebbea3a051f3826acc79ea8ed40edc00f1e5c077e14166 Feb 20 12:03:37.879830 master-0 kubenswrapper[7756]: I0220 12:03:37.879733 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:03:37.879830 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:37.879830 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:37.879830 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:03:37.879830 master-0 kubenswrapper[7756]: I0220 12:03:37.879834 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:38.375762 master-0 kubenswrapper[7756]: I0220 12:03:38.375698 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" 
event={"ID":"9cd8b719-50cb-45ad-ac5f-a01aa45bf832","Type":"ContainerStarted","Data":"45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a"} Feb 20 12:03:38.375762 master-0 kubenswrapper[7756]: I0220 12:03:38.375768 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" event={"ID":"9cd8b719-50cb-45ad-ac5f-a01aa45bf832","Type":"ContainerStarted","Data":"0ce9620617bed9f134ebbea3a051f3826acc79ea8ed40edc00f1e5c077e14166"} Feb 20 12:03:38.420931 master-0 kubenswrapper[7756]: I0220 12:03:38.420782 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=2.420749808 podStartE2EDuration="2.420749808s" podCreationTimestamp="2026-02-20 12:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:03:38.414246553 +0000 UTC m=+864.156494551" watchObservedRunningTime="2026-02-20 12:03:38.420749808 +0000 UTC m=+864.162997856" Feb 20 12:03:38.435882 master-0 kubenswrapper[7756]: I0220 12:03:38.435783 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" podStartSLOduration=2.435760844 podStartE2EDuration="2.435760844s" podCreationTimestamp="2026-02-20 12:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:03:38.433594392 +0000 UTC m=+864.175842480" watchObservedRunningTime="2026-02-20 12:03:38.435760844 +0000 UTC m=+864.178008862" Feb 20 12:03:38.879559 master-0 kubenswrapper[7756]: I0220 12:03:38.879434 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 
12:03:38.879559 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:38.879559 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:38.879559 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:03:38.879905 master-0 kubenswrapper[7756]: I0220 12:03:38.879573 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:39.879095 master-0 kubenswrapper[7756]: I0220 12:03:39.879007 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:03:39.879095 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:39.879095 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:39.879095 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:03:39.879095 master-0 kubenswrapper[7756]: I0220 12:03:39.879102 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:40.883282 master-0 kubenswrapper[7756]: I0220 12:03:40.882844 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:03:40.883282 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:40.883282 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:40.883282 master-0 kubenswrapper[7756]: healthz 
check failed Feb 20 12:03:40.883282 master-0 kubenswrapper[7756]: I0220 12:03:40.882934 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:41.409418 master-0 kubenswrapper[7756]: I0220 12:03:41.409368 7756 generic.go:334] "Generic (PLEG): container finished" podID="e0b28c90-d5b6-44f3-867c-020ece32ac7d" containerID="77890d6705292359843e6d71e469ce5d5c4b9d196554afc0ee3e0617dea2273f" exitCode=0 Feb 20 12:03:41.409644 master-0 kubenswrapper[7756]: I0220 12:03:41.409456 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" event={"ID":"e0b28c90-d5b6-44f3-867c-020ece32ac7d","Type":"ContainerDied","Data":"77890d6705292359843e6d71e469ce5d5c4b9d196554afc0ee3e0617dea2273f"} Feb 20 12:03:41.409644 master-0 kubenswrapper[7756]: I0220 12:03:41.409498 7756 scope.go:117] "RemoveContainer" containerID="73c4ac8066ad3eb7342716309b7b8a802bf833f8fcd163ad12901b630f6305c2" Feb 20 12:03:41.410392 master-0 kubenswrapper[7756]: I0220 12:03:41.410344 7756 scope.go:117] "RemoveContainer" containerID="77890d6705292359843e6d71e469ce5d5c4b9d196554afc0ee3e0617dea2273f" Feb 20 12:03:41.411556 master-0 kubenswrapper[7756]: I0220 12:03:41.411513 7756 generic.go:334] "Generic (PLEG): container finished" podID="8a97bbf5-7409-4f36-894b-b88284e1b6d0" containerID="0394ee858152290726abadbd7c30c0f31262c014870cefb1d45db15a3536bc63" exitCode=0 Feb 20 12:03:41.411613 master-0 kubenswrapper[7756]: I0220 12:03:41.411591 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" event={"ID":"8a97bbf5-7409-4f36-894b-b88284e1b6d0","Type":"ContainerDied","Data":"0394ee858152290726abadbd7c30c0f31262c014870cefb1d45db15a3536bc63"} Feb 20 12:03:41.412291 master-0 
kubenswrapper[7756]: I0220 12:03:41.412231 7756 scope.go:117] "RemoveContainer" containerID="0394ee858152290726abadbd7c30c0f31262c014870cefb1d45db15a3536bc63" Feb 20 12:03:41.413868 master-0 kubenswrapper[7756]: I0220 12:03:41.413842 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt_8ab951b1-6898-4357-b813-16365f3f89d5/cluster-autoscaler-operator/0.log" Feb 20 12:03:41.414426 master-0 kubenswrapper[7756]: I0220 12:03:41.414373 7756 generic.go:334] "Generic (PLEG): container finished" podID="8ab951b1-6898-4357-b813-16365f3f89d5" containerID="9a057bcbfd065697f6b207a64f408c746a9bea8b73ae774c709e37560f5635da" exitCode=255 Feb 20 12:03:41.414500 master-0 kubenswrapper[7756]: I0220 12:03:41.414418 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" event={"ID":"8ab951b1-6898-4357-b813-16365f3f89d5","Type":"ContainerDied","Data":"9a057bcbfd065697f6b207a64f408c746a9bea8b73ae774c709e37560f5635da"} Feb 20 12:03:41.414894 master-0 kubenswrapper[7756]: I0220 12:03:41.414867 7756 scope.go:117] "RemoveContainer" containerID="9a057bcbfd065697f6b207a64f408c746a9bea8b73ae774c709e37560f5635da" Feb 20 12:03:41.418620 master-0 kubenswrapper[7756]: I0220 12:03:41.418573 7756 generic.go:334] "Generic (PLEG): container finished" podID="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" containerID="74b4edd626e209801e3786cc1dc29bf2a950a730269d6de5ed8a28d1b435f9b4" exitCode=0 Feb 20 12:03:41.418822 master-0 kubenswrapper[7756]: I0220 12:03:41.418676 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" event={"ID":"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9","Type":"ContainerDied","Data":"74b4edd626e209801e3786cc1dc29bf2a950a730269d6de5ed8a28d1b435f9b4"} Feb 20 12:03:41.419695 master-0 kubenswrapper[7756]: I0220 12:03:41.419652 7756 scope.go:117] 
"RemoveContainer" containerID="74b4edd626e209801e3786cc1dc29bf2a950a730269d6de5ed8a28d1b435f9b4" Feb 20 12:03:41.431647 master-0 kubenswrapper[7756]: I0220 12:03:41.431594 7756 generic.go:334] "Generic (PLEG): container finished" podID="29489539-68c6-49dd-bc1b-dcf0c7bb2ebe" containerID="4b16a34c164e3dca501c4332ff0f388668786b32102a5a19b7bf01b7c8440060" exitCode=0 Feb 20 12:03:41.431766 master-0 kubenswrapper[7756]: I0220 12:03:41.431686 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" event={"ID":"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe","Type":"ContainerDied","Data":"4b16a34c164e3dca501c4332ff0f388668786b32102a5a19b7bf01b7c8440060"} Feb 20 12:03:41.432276 master-0 kubenswrapper[7756]: I0220 12:03:41.432236 7756 scope.go:117] "RemoveContainer" containerID="4b16a34c164e3dca501c4332ff0f388668786b32102a5a19b7bf01b7c8440060" Feb 20 12:03:41.434475 master-0 kubenswrapper[7756]: I0220 12:03:41.434279 7756 generic.go:334] "Generic (PLEG): container finished" podID="eb135cff-1a2e-468d-80ab-f7db3f57552a" containerID="9583a5d028e457a8b1106eee87ac3a3f6e2e8ded0c2d13dad805b6ccfd5190e1" exitCode=0 Feb 20 12:03:41.434475 master-0 kubenswrapper[7756]: I0220 12:03:41.434350 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" event={"ID":"eb135cff-1a2e-468d-80ab-f7db3f57552a","Type":"ContainerDied","Data":"9583a5d028e457a8b1106eee87ac3a3f6e2e8ded0c2d13dad805b6ccfd5190e1"} Feb 20 12:03:41.434878 master-0 kubenswrapper[7756]: I0220 12:03:41.434836 7756 scope.go:117] "RemoveContainer" containerID="9583a5d028e457a8b1106eee87ac3a3f6e2e8ded0c2d13dad805b6ccfd5190e1" Feb 20 12:03:41.441593 master-0 kubenswrapper[7756]: I0220 12:03:41.439960 7756 generic.go:334] "Generic (PLEG): container finished" podID="bbdbadd9-eeaa-46ef-936e-5db8d395c118" containerID="6e11d702e4faa3980c4584f7fbbe0edd61d03b400f537710d4a26da3248d5efc" 
exitCode=0 Feb 20 12:03:41.441593 master-0 kubenswrapper[7756]: I0220 12:03:41.440021 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" event={"ID":"bbdbadd9-eeaa-46ef-936e-5db8d395c118","Type":"ContainerDied","Data":"6e11d702e4faa3980c4584f7fbbe0edd61d03b400f537710d4a26da3248d5efc"} Feb 20 12:03:41.441593 master-0 kubenswrapper[7756]: I0220 12:03:41.441112 7756 scope.go:117] "RemoveContainer" containerID="6e11d702e4faa3980c4584f7fbbe0edd61d03b400f537710d4a26da3248d5efc" Feb 20 12:03:41.442391 master-0 kubenswrapper[7756]: I0220 12:03:41.442322 7756 generic.go:334] "Generic (PLEG): container finished" podID="5360f3f5-2d07-432f-af45-22659538c55e" containerID="2d9f878c267250c634175c8afa99432d0586168560ba8d948183859d4b64504a" exitCode=0 Feb 20 12:03:41.442391 master-0 kubenswrapper[7756]: I0220 12:03:41.442372 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" event={"ID":"5360f3f5-2d07-432f-af45-22659538c55e","Type":"ContainerDied","Data":"2d9f878c267250c634175c8afa99432d0586168560ba8d948183859d4b64504a"} Feb 20 12:03:41.442725 master-0 kubenswrapper[7756]: I0220 12:03:41.442700 7756 scope.go:117] "RemoveContainer" containerID="2d9f878c267250c634175c8afa99432d0586168560ba8d948183859d4b64504a" Feb 20 12:03:41.445706 master-0 kubenswrapper[7756]: I0220 12:03:41.445675 7756 generic.go:334] "Generic (PLEG): container finished" podID="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" containerID="de3cf90976c88f94ee4890bd56c7f0488152bb4020f300dabbcd987cd8523183" exitCode=0 Feb 20 12:03:41.445782 master-0 kubenswrapper[7756]: I0220 12:03:41.445723 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" 
event={"ID":"6c3aa45a-44cc-48fb-a478-ce01a70c4b02","Type":"ContainerDied","Data":"de3cf90976c88f94ee4890bd56c7f0488152bb4020f300dabbcd987cd8523183"} Feb 20 12:03:41.446060 master-0 kubenswrapper[7756]: I0220 12:03:41.446025 7756 scope.go:117] "RemoveContainer" containerID="de3cf90976c88f94ee4890bd56c7f0488152bb4020f300dabbcd987cd8523183" Feb 20 12:03:41.448136 master-0 kubenswrapper[7756]: I0220 12:03:41.448103 7756 generic.go:334] "Generic (PLEG): container finished" podID="312ca024-c8f0-4994-8f9a-b707607341fe" containerID="a2f57d0cbbd57b5325ad0aac9713219f739036a6acc3195c5bbfa570326dbcd4" exitCode=0 Feb 20 12:03:41.448413 master-0 kubenswrapper[7756]: I0220 12:03:41.448155 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-fv598" event={"ID":"312ca024-c8f0-4994-8f9a-b707607341fe","Type":"ContainerDied","Data":"a2f57d0cbbd57b5325ad0aac9713219f739036a6acc3195c5bbfa570326dbcd4"} Feb 20 12:03:41.448624 master-0 kubenswrapper[7756]: I0220 12:03:41.448597 7756 scope.go:117] "RemoveContainer" containerID="a2f57d0cbbd57b5325ad0aac9713219f739036a6acc3195c5bbfa570326dbcd4" Feb 20 12:03:41.462339 master-0 kubenswrapper[7756]: I0220 12:03:41.458694 7756 generic.go:334] "Generic (PLEG): container finished" podID="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" containerID="f3706b3c34cf4ca963f10ba2e8498b0291187d135d8a240b66a3eb3e3ede44fb" exitCode=0 Feb 20 12:03:41.462339 master-0 kubenswrapper[7756]: I0220 12:03:41.458845 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" event={"ID":"ce2b6fde-de56-49c3-9bd6-e81c679b02bc","Type":"ContainerDied","Data":"f3706b3c34cf4ca963f10ba2e8498b0291187d135d8a240b66a3eb3e3ede44fb"} Feb 20 12:03:41.462339 master-0 kubenswrapper[7756]: I0220 12:03:41.461054 7756 scope.go:117] "RemoveContainer" containerID="f3706b3c34cf4ca963f10ba2e8498b0291187d135d8a240b66a3eb3e3ede44fb" Feb 20 12:03:41.462692 master-0 
kubenswrapper[7756]: I0220 12:03:41.462596 7756 generic.go:334] "Generic (PLEG): container finished" podID="7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca" containerID="e47a05c8d2dbbc49205addf05b6f326c0f38dfd41f3498f290a08ebfa22cbc94" exitCode=0 Feb 20 12:03:41.462736 master-0 kubenswrapper[7756]: I0220 12:03:41.462679 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" event={"ID":"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca","Type":"ContainerDied","Data":"e47a05c8d2dbbc49205addf05b6f326c0f38dfd41f3498f290a08ebfa22cbc94"} Feb 20 12:03:41.463454 master-0 kubenswrapper[7756]: I0220 12:03:41.463417 7756 scope.go:117] "RemoveContainer" containerID="e47a05c8d2dbbc49205addf05b6f326c0f38dfd41f3498f290a08ebfa22cbc94" Feb 20 12:03:41.465288 master-0 kubenswrapper[7756]: I0220 12:03:41.465250 7756 generic.go:334] "Generic (PLEG): container finished" podID="839bf5b1-b242-4bbd-bc09-cf6abcf7f734" containerID="a536c272954462921fc604267b25f8d65d6f6bc9444d2c6bb8607f4b9f14a00d" exitCode=0 Feb 20 12:03:41.465380 master-0 kubenswrapper[7756]: I0220 12:03:41.465356 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" event={"ID":"839bf5b1-b242-4bbd-bc09-cf6abcf7f734","Type":"ContainerDied","Data":"a536c272954462921fc604267b25f8d65d6f6bc9444d2c6bb8607f4b9f14a00d"} Feb 20 12:03:41.465907 master-0 kubenswrapper[7756]: I0220 12:03:41.465870 7756 scope.go:117] "RemoveContainer" containerID="a536c272954462921fc604267b25f8d65d6f6bc9444d2c6bb8607f4b9f14a00d" Feb 20 12:03:41.468180 master-0 kubenswrapper[7756]: I0220 12:03:41.468149 7756 generic.go:334] "Generic (PLEG): container finished" podID="f98aeaf7-bf1a-46af-bf1b-85713baa4c67" containerID="f8d154b1c828589837ec3c8ec4ad4d835c269d69b663caaef17de5eec1f25aa8" exitCode=0 Feb 20 12:03:41.468362 master-0 kubenswrapper[7756]: I0220 12:03:41.468332 7756 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" event={"ID":"f98aeaf7-bf1a-46af-bf1b-85713baa4c67","Type":"ContainerDied","Data":"f8d154b1c828589837ec3c8ec4ad4d835c269d69b663caaef17de5eec1f25aa8"} Feb 20 12:03:41.468843 master-0 kubenswrapper[7756]: I0220 12:03:41.468822 7756 scope.go:117] "RemoveContainer" containerID="f8d154b1c828589837ec3c8ec4ad4d835c269d69b663caaef17de5eec1f25aa8" Feb 20 12:03:41.472219 master-0 kubenswrapper[7756]: I0220 12:03:41.472179 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-mr99g_dbce6cdc-040a-48e1-8a81-b6ff9c180eba/package-server-manager/0.log" Feb 20 12:03:41.475029 master-0 kubenswrapper[7756]: I0220 12:03:41.474973 7756 generic.go:334] "Generic (PLEG): container finished" podID="dbce6cdc-040a-48e1-8a81-b6ff9c180eba" containerID="d69dad82c79e06506f238a23ca41e2826075f52d69f22b3756440b59139033ec" exitCode=1 Feb 20 12:03:41.475127 master-0 kubenswrapper[7756]: I0220 12:03:41.475046 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" event={"ID":"dbce6cdc-040a-48e1-8a81-b6ff9c180eba","Type":"ContainerDied","Data":"d69dad82c79e06506f238a23ca41e2826075f52d69f22b3756440b59139033ec"} Feb 20 12:03:41.475839 master-0 kubenswrapper[7756]: I0220 12:03:41.475779 7756 scope.go:117] "RemoveContainer" containerID="d69dad82c79e06506f238a23ca41e2826075f52d69f22b3756440b59139033ec" Feb 20 12:03:41.483369 master-0 kubenswrapper[7756]: I0220 12:03:41.483318 7756 generic.go:334] "Generic (PLEG): container finished" podID="1df81fcc-f967-4874-ad16-1a89f0e7875a" containerID="f658812d3a0840e273c061153c1646fa88e6e4617da166e0ff391ed3c4a82be1" exitCode=0 Feb 20 12:03:41.483474 master-0 kubenswrapper[7756]: I0220 12:03:41.483393 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" event={"ID":"1df81fcc-f967-4874-ad16-1a89f0e7875a","Type":"ContainerDied","Data":"f658812d3a0840e273c061153c1646fa88e6e4617da166e0ff391ed3c4a82be1"} Feb 20 12:03:41.484264 master-0 kubenswrapper[7756]: I0220 12:03:41.484223 7756 scope.go:117] "RemoveContainer" containerID="f658812d3a0840e273c061153c1646fa88e6e4617da166e0ff391ed3c4a82be1" Feb 20 12:03:41.495521 master-0 kubenswrapper[7756]: I0220 12:03:41.495472 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-gwpst_4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/cluster-node-tuning-operator/0.log" Feb 20 12:03:41.495657 master-0 kubenswrapper[7756]: I0220 12:03:41.495566 7756 generic.go:334] "Generic (PLEG): container finished" podID="4cbb46f1-1c33-42fc-8371-6a1bea8c28ff" containerID="f478ae19f7f37b0b144530d29503cc9eb3edcf8d27e26035c2139b9aa149987b" exitCode=1 Feb 20 12:03:41.495657 master-0 kubenswrapper[7756]: I0220 12:03:41.495639 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" event={"ID":"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff","Type":"ContainerDied","Data":"f478ae19f7f37b0b144530d29503cc9eb3edcf8d27e26035c2139b9aa149987b"} Feb 20 12:03:41.496241 master-0 kubenswrapper[7756]: I0220 12:03:41.496205 7756 scope.go:117] "RemoveContainer" containerID="f478ae19f7f37b0b144530d29503cc9eb3edcf8d27e26035c2139b9aa149987b" Feb 20 12:03:41.499171 master-0 kubenswrapper[7756]: I0220 12:03:41.499092 7756 generic.go:334] "Generic (PLEG): container finished" podID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerID="b4292dccd690e9143e933dee29f59d01786a2f035fd7b57469d300f2f8a55365" exitCode=0 Feb 20 12:03:41.499327 master-0 kubenswrapper[7756]: I0220 12:03:41.499203 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" event={"ID":"c29fd426-7c89-434e-8332-1ca31075d4bf","Type":"ContainerDied","Data":"b4292dccd690e9143e933dee29f59d01786a2f035fd7b57469d300f2f8a55365"} Feb 20 12:03:41.499835 master-0 kubenswrapper[7756]: I0220 12:03:41.499781 7756 scope.go:117] "RemoveContainer" containerID="b4292dccd690e9143e933dee29f59d01786a2f035fd7b57469d300f2f8a55365" Feb 20 12:03:41.500979 master-0 kubenswrapper[7756]: I0220 12:03:41.500936 7756 generic.go:334] "Generic (PLEG): container finished" podID="89383482-190e-4f74-a81e-b1547e5b9ae6" containerID="cdc9cc9ed8b0ca2df37b48bd33917f4f6c78f23c4f8aeddaab64905dab048bcd" exitCode=0 Feb 20 12:03:41.500979 master-0 kubenswrapper[7756]: I0220 12:03:41.500970 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" event={"ID":"89383482-190e-4f74-a81e-b1547e5b9ae6","Type":"ContainerDied","Data":"cdc9cc9ed8b0ca2df37b48bd33917f4f6c78f23c4f8aeddaab64905dab048bcd"} Feb 20 12:03:41.501745 master-0 kubenswrapper[7756]: I0220 12:03:41.501500 7756 scope.go:117] "RemoveContainer" containerID="cdc9cc9ed8b0ca2df37b48bd33917f4f6c78f23c4f8aeddaab64905dab048bcd" Feb 20 12:03:41.505820 master-0 kubenswrapper[7756]: I0220 12:03:41.505768 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-c48c8bf7c-qwwbk_01e90033-9ddf-41b4-ab61-e89add6c2fde/service-ca-operator/1.log" Feb 20 12:03:41.505944 master-0 kubenswrapper[7756]: I0220 12:03:41.505828 7756 generic.go:334] "Generic (PLEG): container finished" podID="01e90033-9ddf-41b4-ab61-e89add6c2fde" containerID="f9528f6d61bdc5d1282c2d9d2f6d9758a8e04364c9337158e14aef2c2ffff6b4" exitCode=0 Feb 20 12:03:41.505944 master-0 kubenswrapper[7756]: I0220 12:03:41.505871 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" 
event={"ID":"01e90033-9ddf-41b4-ab61-e89add6c2fde","Type":"ContainerDied","Data":"f9528f6d61bdc5d1282c2d9d2f6d9758a8e04364c9337158e14aef2c2ffff6b4"} Feb 20 12:03:41.506467 master-0 kubenswrapper[7756]: I0220 12:03:41.506433 7756 scope.go:117] "RemoveContainer" containerID="f9528f6d61bdc5d1282c2d9d2f6d9758a8e04364c9337158e14aef2c2ffff6b4" Feb 20 12:03:41.879485 master-0 kubenswrapper[7756]: I0220 12:03:41.879424 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:03:41.879485 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:03:41.879485 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:03:41.879485 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:03:41.879696 master-0 kubenswrapper[7756]: I0220 12:03:41.879485 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:03:41.879696 master-0 kubenswrapper[7756]: I0220 12:03:41.879542 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:03:41.880059 master-0 kubenswrapper[7756]: I0220 12:03:41.880033 7756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"e6a70c0e0f237b900ba323a2d2250f1ed5e02a069194617f8e9507c1f16cde63"} pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" containerMessage="Container router failed startup probe, will be restarted" Feb 20 12:03:41.880098 master-0 kubenswrapper[7756]: I0220 12:03:41.880065 7756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" containerID="cri-o://e6a70c0e0f237b900ba323a2d2250f1ed5e02a069194617f8e9507c1f16cde63" gracePeriod=3600 Feb 20 12:03:42.074980 master-0 kubenswrapper[7756]: I0220 12:03:42.074925 7756 scope.go:117] "RemoveContainer" containerID="c3fd58850441274093931c36087d9a8518e8af6cd5182fdb00d74233da8d66da" Feb 20 12:03:42.124120 master-0 kubenswrapper[7756]: I0220 12:03:42.124029 7756 scope.go:117] "RemoveContainer" containerID="f4d85100cd0f06816a98689538bc93ed981f60823f3ce37e7c844447bcdb96ee" Feb 20 12:03:42.170835 master-0 kubenswrapper[7756]: I0220 12:03:42.170791 7756 scope.go:117] "RemoveContainer" containerID="9e91bb7cb260950fd5e975354ec43adcbf694e33c154dd1b679deca6be0b9cfb" Feb 20 12:03:42.199127 master-0 kubenswrapper[7756]: I0220 12:03:42.199082 7756 scope.go:117] "RemoveContainer" containerID="ba33361681392f1def86ef3fcb0b685dd11e1a8eb4030176e604e1253b421630" Feb 20 12:03:42.264451 master-0 kubenswrapper[7756]: I0220 12:03:42.264399 7756 scope.go:117] "RemoveContainer" containerID="5461ac8869ede1ae48aaf443305cec8c0cf9a21a54dc206e103440a3f966bcc9" Feb 20 12:03:42.333114 master-0 kubenswrapper[7756]: I0220 12:03:42.333028 7756 scope.go:117] "RemoveContainer" containerID="5602fcf86766ef7d0d60953da5d2c52d3e2681c284b76402a701dd6648958446" Feb 20 12:03:43.539238 master-0 kubenswrapper[7756]: I0220 12:03:43.539178 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" event={"ID":"e0b28c90-d5b6-44f3-867c-020ece32ac7d","Type":"ContainerStarted","Data":"6e72a3f63f69d44d250455191ce5468054ea0869cca0b0f3c36345f688f39321"} Feb 20 12:03:43.541409 master-0 kubenswrapper[7756]: I0220 12:03:43.541370 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" 
event={"ID":"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9","Type":"ContainerStarted","Data":"b222f35dff3e5c25870d080c6d3713229c5fbac93a1be203e7ca90a8cce2d1c8"} Feb 20 12:03:43.543416 master-0 kubenswrapper[7756]: I0220 12:03:43.543384 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" event={"ID":"c29fd426-7c89-434e-8332-1ca31075d4bf","Type":"ContainerStarted","Data":"c19eec66d34e5d17a3e186a00fcaa04150b49fcc6bd52c6714edcc3b79452483"} Feb 20 12:03:43.543707 master-0 kubenswrapper[7756]: I0220 12:03:43.543683 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:03:43.545368 master-0 kubenswrapper[7756]: I0220 12:03:43.545339 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" event={"ID":"01e90033-9ddf-41b4-ab61-e89add6c2fde","Type":"ContainerStarted","Data":"e2d1b404f56d49d59847e718e3e5abe26ea17ac8d0bc487708471f385bae4f10"} Feb 20 12:03:43.548441 master-0 kubenswrapper[7756]: I0220 12:03:43.548413 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" event={"ID":"ce2b6fde-de56-49c3-9bd6-e81c679b02bc","Type":"ContainerStarted","Data":"add605fbe9481f0dc870665fc99da0e03e0b803894d376444bb50f597292d519"} Feb 20 12:03:43.549637 master-0 kubenswrapper[7756]: I0220 12:03:43.549579 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:03:43.550358 master-0 kubenswrapper[7756]: I0220 12:03:43.550323 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" 
event={"ID":"5360f3f5-2d07-432f-af45-22659538c55e","Type":"ContainerStarted","Data":"37e584969a982f725eee28c571c96a59baa1ce8b591228092cd4e6ef43006b06"} Feb 20 12:03:43.553133 master-0 kubenswrapper[7756]: I0220 12:03:43.553108 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" event={"ID":"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe","Type":"ContainerStarted","Data":"343096342cd404a3631abeddf5ab849cbf148386bf095fc0870ee61b61a3a04d"} Feb 20 12:03:43.555944 master-0 kubenswrapper[7756]: I0220 12:03:43.555920 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" event={"ID":"1df81fcc-f967-4874-ad16-1a89f0e7875a","Type":"ContainerStarted","Data":"cb8c4cf8e1253c6df8004d1ed005598f238eded557dc5452cec7513f000d843a"} Feb 20 12:03:43.572250 master-0 kubenswrapper[7756]: I0220 12:03:43.572164 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" event={"ID":"eb135cff-1a2e-468d-80ab-f7db3f57552a","Type":"ContainerStarted","Data":"7ae653d399c1efbda21068b923ef809daa94f630f8c3e45bb2632a45ffa943fc"} Feb 20 12:03:43.603764 master-0 kubenswrapper[7756]: I0220 12:03:43.603716 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-fv598" event={"ID":"312ca024-c8f0-4994-8f9a-b707607341fe","Type":"ContainerStarted","Data":"1b16edd41bc6149291d984909a32f785e18c62f96bfbc70f4929e7dd6ea1967f"} Feb 20 12:03:43.612126 master-0 kubenswrapper[7756]: I0220 12:03:43.612049 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" event={"ID":"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca","Type":"ContainerStarted","Data":"eb9cdfd7e84955798866942c15f767bcf749f682539f459814ce93e020239637"} Feb 20 12:03:43.621785 master-0 
kubenswrapper[7756]: I0220 12:03:43.621737 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" event={"ID":"839bf5b1-b242-4bbd-bc09-cf6abcf7f734","Type":"ContainerStarted","Data":"de8b62b5f33e37d35b0b60ccd1f18ac1a7544229c0d516d4b01dbe89e5113e55"} Feb 20 12:03:43.629929 master-0 kubenswrapper[7756]: I0220 12:03:43.629880 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" event={"ID":"f98aeaf7-bf1a-46af-bf1b-85713baa4c67","Type":"ContainerStarted","Data":"75580c03427a4345d2c7a76d71bc6d232eac9686916eea0405b8e6b4f8896064"} Feb 20 12:03:43.642088 master-0 kubenswrapper[7756]: I0220 12:03:43.642048 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-mr99g_dbce6cdc-040a-48e1-8a81-b6ff9c180eba/package-server-manager/0.log" Feb 20 12:03:43.642611 master-0 kubenswrapper[7756]: I0220 12:03:43.642574 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" event={"ID":"dbce6cdc-040a-48e1-8a81-b6ff9c180eba","Type":"ContainerStarted","Data":"ac4326f39afce598d746829017055246be22142f37c8b38e26af981b3a1a12fa"} Feb 20 12:03:43.642815 master-0 kubenswrapper[7756]: I0220 12:03:43.642793 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 12:03:43.644381 master-0 kubenswrapper[7756]: I0220 12:03:43.644358 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" event={"ID":"89383482-190e-4f74-a81e-b1547e5b9ae6","Type":"ContainerStarted","Data":"732c82e4de9387d0031bf936fa457bbd01219032435aefce24e9dd19e22be1ca"} Feb 20 12:03:43.646108 master-0 kubenswrapper[7756]: 
I0220 12:03:43.646086 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-gwpst_4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/cluster-node-tuning-operator/0.log" Feb 20 12:03:43.646171 master-0 kubenswrapper[7756]: I0220 12:03:43.646133 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" event={"ID":"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff","Type":"ContainerStarted","Data":"115071f5754b1dae3fa6664c1da9fbd69d11cfd1632ee01ec7e38cdd9578b92d"} Feb 20 12:03:43.651673 master-0 kubenswrapper[7756]: I0220 12:03:43.651651 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt_8ab951b1-6898-4357-b813-16365f3f89d5/cluster-autoscaler-operator/0.log" Feb 20 12:03:43.651957 master-0 kubenswrapper[7756]: I0220 12:03:43.651934 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" event={"ID":"8ab951b1-6898-4357-b813-16365f3f89d5","Type":"ContainerStarted","Data":"bbcd3c72c2469542b1a18f35774290442502855e21a373acf720e0f8488b6015"} Feb 20 12:03:43.664127 master-0 kubenswrapper[7756]: I0220 12:03:43.664081 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" event={"ID":"8a97bbf5-7409-4f36-894b-b88284e1b6d0","Type":"ContainerStarted","Data":"40c4ee090cecff0fee415511187cf82f68d781599bbdef33acae7ab208900711"} Feb 20 12:03:43.666727 master-0 kubenswrapper[7756]: I0220 12:03:43.666700 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" event={"ID":"6c3aa45a-44cc-48fb-a478-ce01a70c4b02","Type":"ContainerStarted","Data":"26411e9c009849e32629fc9ef7ae2bbd248944a9ffb6cc0e23a16a9e2e7bc996"} Feb 20 12:03:43.673541 master-0 kubenswrapper[7756]: 
I0220 12:03:43.670123 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" event={"ID":"bbdbadd9-eeaa-46ef-936e-5db8d395c118","Type":"ContainerStarted","Data":"17bcf34f076399e8e35b20ec0fa0fb593821876144dabca7606e8b3af5246e21"} Feb 20 12:03:45.579355 master-0 kubenswrapper[7756]: I0220 12:03:45.579286 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726" Feb 20 12:03:46.693508 master-0 kubenswrapper[7756]: I0220 12:03:46.693447 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/3.log" Feb 20 12:03:46.694483 master-0 kubenswrapper[7756]: I0220 12:03:46.694453 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager-cert-syncer/0.log" Feb 20 12:03:46.694949 master-0 kubenswrapper[7756]: I0220 12:03:46.694919 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log" Feb 20 12:03:46.695018 master-0 kubenswrapper[7756]: I0220 12:03:46.694956 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a767e0793175d588147a983384ee43db","Type":"ContainerStarted","Data":"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0"} Feb 20 12:03:50.141572 master-0 kubenswrapper[7756]: I0220 12:03:50.138083 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 20 12:03:50.142612 master-0 kubenswrapper[7756]: I0220 12:03:50.141595 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.152298 master-0 kubenswrapper[7756]: I0220 12:03:50.152242 7756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 20 12:03:50.152628 master-0 kubenswrapper[7756]: I0220 12:03:50.152434 7756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-h4rwl" Feb 20 12:03:50.179995 master-0 kubenswrapper[7756]: I0220 12:03:50.177340 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 20 12:03:50.265433 master-0 kubenswrapper[7756]: I0220 12:03:50.265313 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-var-lock\") pod \"installer-3-master-0\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.265845 master-0 kubenswrapper[7756]: I0220 12:03:50.265784 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kube-api-access\") pod \"installer-3-master-0\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.265936 master-0 kubenswrapper[7756]: I0220 12:03:50.265870 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.368299 master-0 
kubenswrapper[7756]: I0220 12:03:50.368196 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kube-api-access\") pod \"installer-3-master-0\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.368618 master-0 kubenswrapper[7756]: I0220 12:03:50.368520 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.368618 master-0 kubenswrapper[7756]: I0220 12:03:50.368599 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-var-lock\") pod \"installer-3-master-0\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.368782 master-0 kubenswrapper[7756]: I0220 12:03:50.368612 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.368782 master-0 kubenswrapper[7756]: I0220 12:03:50.368687 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-var-lock\") pod \"installer-3-master-0\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.399209 master-0 
kubenswrapper[7756]: I0220 12:03:50.399037 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kube-api-access\") pod \"installer-3-master-0\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.493598 master-0 kubenswrapper[7756]: I0220 12:03:50.493483 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:03:50.983643 master-0 kubenswrapper[7756]: I0220 12:03:50.982980 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 20 12:03:51.744173 master-0 kubenswrapper[7756]: I0220 12:03:51.744107 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"a41b23ca-9eed-4eb9-95dc-92418a6f4e86","Type":"ContainerStarted","Data":"2260e76cd3d2df450c12a0158d94e76ddd7d3b92f4e2a837f57c5c73685c7d75"} Feb 20 12:03:51.744856 master-0 kubenswrapper[7756]: I0220 12:03:51.744177 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"a41b23ca-9eed-4eb9-95dc-92418a6f4e86","Type":"ContainerStarted","Data":"ce9ed94bd982d2f41102a55cb2e618edd19c6224d6f0adfa7cf35da3a1237451"} Feb 20 12:03:51.769943 master-0 kubenswrapper[7756]: I0220 12:03:51.769798 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=1.769776442 podStartE2EDuration="1.769776442s" podCreationTimestamp="2026-02-20 12:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:03:51.767494867 +0000 UTC m=+877.509742885" 
watchObservedRunningTime="2026-02-20 12:03:51.769776442 +0000 UTC m=+877.512024460" Feb 20 12:03:51.969574 master-0 kubenswrapper[7756]: I0220 12:03:51.969375 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:03:51.969574 master-0 kubenswrapper[7756]: I0220 12:03:51.969452 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:03:51.976795 master-0 kubenswrapper[7756]: I0220 12:03:51.976713 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:01.975780 master-0 kubenswrapper[7756]: I0220 12:04:01.975693 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:04.217228 master-0 kubenswrapper[7756]: I0220 12:04:04.217095 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Feb 20 12:04:04.218108 master-0 kubenswrapper[7756]: I0220 12:04:04.217558 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" podUID="9cd8b719-50cb-45ad-ac5f-a01aa45bf832" containerName="installer" containerID="cri-o://45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a" gracePeriod=30 Feb 20 12:04:07.419446 master-0 kubenswrapper[7756]: I0220 12:04:07.419362 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 20 12:04:07.420917 master-0 kubenswrapper[7756]: I0220 12:04:07.420873 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.439400 master-0 kubenswrapper[7756]: I0220 12:04:07.439292 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 20 12:04:07.551487 master-0 kubenswrapper[7756]: I0220 12:04:07.551316 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.551487 master-0 kubenswrapper[7756]: I0220 12:04:07.551464 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.551938 master-0 kubenswrapper[7756]: I0220 12:04:07.551638 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.653943 master-0 kubenswrapper[7756]: I0220 12:04:07.653838 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.653943 master-0 kubenswrapper[7756]: I0220 12:04:07.653922 7756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.654587 master-0 kubenswrapper[7756]: I0220 12:04:07.654037 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.654587 master-0 kubenswrapper[7756]: I0220 12:04:07.654031 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.654587 master-0 kubenswrapper[7756]: I0220 12:04:07.654142 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.683254 master-0 kubenswrapper[7756]: I0220 12:04:07.683117 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:07.764925 master-0 kubenswrapper[7756]: I0220 12:04:07.764817 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:08.256414 master-0 kubenswrapper[7756]: I0220 12:04:08.256343 7756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 20 12:04:08.263310 master-0 kubenswrapper[7756]: W0220 12:04:08.263242 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod97095f88_ee81_4a47_9bd7_1dbe71ec8d4d.slice/crio-3590f63863912596b171ca5f35809210ae59c7b19c2fdb801182abdc3cd97397 WatchSource:0}: Error finding container 3590f63863912596b171ca5f35809210ae59c7b19c2fdb801182abdc3cd97397: Status 404 returned error can't find the container with id 3590f63863912596b171ca5f35809210ae59c7b19c2fdb801182abdc3cd97397 Feb 20 12:04:08.886691 master-0 kubenswrapper[7756]: I0220 12:04:08.886601 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d","Type":"ContainerStarted","Data":"54c853c11767fb2e9c16b82b830e00aa5d8a596a5498e4384e29c0cde6cc8aed"} Feb 20 12:04:08.886691 master-0 kubenswrapper[7756]: I0220 12:04:08.886692 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d","Type":"ContainerStarted","Data":"3590f63863912596b171ca5f35809210ae59c7b19c2fdb801182abdc3cd97397"} Feb 20 12:04:08.938096 master-0 kubenswrapper[7756]: I0220 12:04:08.938006 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=1.937989151 podStartE2EDuration="1.937989151s" podCreationTimestamp="2026-02-20 12:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:04:08.92069236 +0000 UTC m=+894.662940378" watchObservedRunningTime="2026-02-20 
12:04:08.937989151 +0000 UTC m=+894.680237159" Feb 20 12:04:09.283068 master-0 kubenswrapper[7756]: I0220 12:04:09.283001 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-retry-1-master-0_9cd8b719-50cb-45ad-ac5f-a01aa45bf832/installer/0.log" Feb 20 12:04:09.283301 master-0 kubenswrapper[7756]: I0220 12:04:09.283104 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:04:09.478907 master-0 kubenswrapper[7756]: I0220 12:04:09.478811 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kubelet-dir\") pod \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " Feb 20 12:04:09.479116 master-0 kubenswrapper[7756]: I0220 12:04:09.478922 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9cd8b719-50cb-45ad-ac5f-a01aa45bf832" (UID: "9cd8b719-50cb-45ad-ac5f-a01aa45bf832"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:04:09.479116 master-0 kubenswrapper[7756]: I0220 12:04:09.479081 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kube-api-access\") pod \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " Feb 20 12:04:09.479210 master-0 kubenswrapper[7756]: I0220 12:04:09.479191 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-var-lock\") pod \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\" (UID: \"9cd8b719-50cb-45ad-ac5f-a01aa45bf832\") " Feb 20 12:04:09.479336 master-0 kubenswrapper[7756]: I0220 12:04:09.479297 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-var-lock" (OuterVolumeSpecName: "var-lock") pod "9cd8b719-50cb-45ad-ac5f-a01aa45bf832" (UID: "9cd8b719-50cb-45ad-ac5f-a01aa45bf832"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:04:09.479728 master-0 kubenswrapper[7756]: I0220 12:04:09.479692 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 12:04:09.479785 master-0 kubenswrapper[7756]: I0220 12:04:09.479726 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 12:04:09.483829 master-0 kubenswrapper[7756]: I0220 12:04:09.483771 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9cd8b719-50cb-45ad-ac5f-a01aa45bf832" (UID: "9cd8b719-50cb-45ad-ac5f-a01aa45bf832"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:04:09.581618 master-0 kubenswrapper[7756]: I0220 12:04:09.581499 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd8b719-50cb-45ad-ac5f-a01aa45bf832-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 12:04:09.896788 master-0 kubenswrapper[7756]: I0220 12:04:09.896650 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-retry-1-master-0_9cd8b719-50cb-45ad-ac5f-a01aa45bf832/installer/0.log" Feb 20 12:04:09.896788 master-0 kubenswrapper[7756]: I0220 12:04:09.896735 7756 generic.go:334] "Generic (PLEG): container finished" podID="9cd8b719-50cb-45ad-ac5f-a01aa45bf832" containerID="45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a" exitCode=1 Feb 20 12:04:09.897880 master-0 kubenswrapper[7756]: I0220 12:04:09.896868 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 20 12:04:09.897880 master-0 kubenswrapper[7756]: I0220 12:04:09.896887 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" event={"ID":"9cd8b719-50cb-45ad-ac5f-a01aa45bf832","Type":"ContainerDied","Data":"45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a"} Feb 20 12:04:09.897880 master-0 kubenswrapper[7756]: I0220 12:04:09.897000 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" event={"ID":"9cd8b719-50cb-45ad-ac5f-a01aa45bf832","Type":"ContainerDied","Data":"0ce9620617bed9f134ebbea3a051f3826acc79ea8ed40edc00f1e5c077e14166"} Feb 20 12:04:09.897880 master-0 kubenswrapper[7756]: I0220 12:04:09.897039 7756 scope.go:117] "RemoveContainer" containerID="45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a" Feb 20 12:04:09.924584 master-0 kubenswrapper[7756]: I0220 12:04:09.924481 7756 scope.go:117] "RemoveContainer" containerID="45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a" Feb 20 12:04:09.925304 master-0 kubenswrapper[7756]: E0220 12:04:09.925232 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a\": container with ID starting with 45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a not found: ID does not exist" containerID="45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a" Feb 20 12:04:09.925465 master-0 kubenswrapper[7756]: I0220 12:04:09.925298 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a"} err="failed to get container status \"45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a\": rpc error: code = NotFound 
desc = could not find container \"45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a\": container with ID starting with 45919566e098baefe87bebf049b652731a4ba475b6e20928b17f2d3bdb6d2e5a not found: ID does not exist" Feb 20 12:04:09.954551 master-0 kubenswrapper[7756]: I0220 12:04:09.954440 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Feb 20 12:04:09.961275 master-0 kubenswrapper[7756]: I0220 12:04:09.961199 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Feb 20 12:04:10.590827 master-0 kubenswrapper[7756]: I0220 12:04:10.590745 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd8b719-50cb-45ad-ac5f-a01aa45bf832" path="/var/lib/kubelet/pods/9cd8b719-50cb-45ad-ac5f-a01aa45bf832/volumes" Feb 20 12:04:14.954556 master-0 kubenswrapper[7756]: I0220 12:04:14.954458 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_aa2f6c0cf73fadd0d96a26150bb4dbb3/kube-scheduler-cert-syncer/0.log" Feb 20 12:04:14.955560 master-0 kubenswrapper[7756]: I0220 12:04:14.955497 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_aa2f6c0cf73fadd0d96a26150bb4dbb3/kube-scheduler/0.log" Feb 20 12:04:14.956382 master-0 kubenswrapper[7756]: I0220 12:04:14.956325 7756 generic.go:334] "Generic (PLEG): container finished" podID="aa2f6c0cf73fadd0d96a26150bb4dbb3" containerID="b3973bb4e0436fc81dccb8348c1f9f8491e95c0a5851afc33de82d620bb3b291" exitCode=1 Feb 20 12:04:14.956510 master-0 kubenswrapper[7756]: I0220 12:04:14.956398 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerDied","Data":"b3973bb4e0436fc81dccb8348c1f9f8491e95c0a5851afc33de82d620bb3b291"} Feb 
20 12:04:14.957458 master-0 kubenswrapper[7756]: I0220 12:04:14.957416 7756 scope.go:117] "RemoveContainer" containerID="b3973bb4e0436fc81dccb8348c1f9f8491e95c0a5851afc33de82d620bb3b291" Feb 20 12:04:15.969034 master-0 kubenswrapper[7756]: I0220 12:04:15.968961 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_aa2f6c0cf73fadd0d96a26150bb4dbb3/kube-scheduler-cert-syncer/0.log" Feb 20 12:04:15.970000 master-0 kubenswrapper[7756]: I0220 12:04:15.969941 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_aa2f6c0cf73fadd0d96a26150bb4dbb3/kube-scheduler/0.log" Feb 20 12:04:15.970736 master-0 kubenswrapper[7756]: I0220 12:04:15.970671 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"37900565eb75d9b798f3f149616903b7d394f85e312ddc281cb50f56eac08ff1"} Feb 20 12:04:17.604148 master-0 kubenswrapper[7756]: I0220 12:04:17.604070 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 12:04:24.425443 master-0 kubenswrapper[7756]: I0220 12:04:24.425361 7756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 12:04:24.426424 master-0 kubenswrapper[7756]: I0220 12:04:24.425863 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c" gracePeriod=30 Feb 20 12:04:24.426424 master-0 kubenswrapper[7756]: I0220 12:04:24.425934 7756 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477" gracePeriod=30 Feb 20 12:04:24.426424 master-0 kubenswrapper[7756]: I0220 12:04:24.426024 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" containerID="cri-o://b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0" gracePeriod=30 Feb 20 12:04:24.426424 master-0 kubenswrapper[7756]: I0220 12:04:24.426040 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager" containerID="cri-o://fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23" gracePeriod=30 Feb 20 12:04:24.427952 master-0 kubenswrapper[7756]: I0220 12:04:24.427473 7756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 12:04:24.427952 master-0 kubenswrapper[7756]: E0220 12:04:24.427914 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.427952 master-0 kubenswrapper[7756]: I0220 12:04:24.427939 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: E0220 12:04:24.427959 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" 
containerName="cluster-policy-controller" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: I0220 12:04:24.427972 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: E0220 12:04:24.427987 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-recovery-controller" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: I0220 12:04:24.428000 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-recovery-controller" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: E0220 12:04:24.428017 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-cert-syncer" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: I0220 12:04:24.428031 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-cert-syncer" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: E0220 12:04:24.428051 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: I0220 12:04:24.428063 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: E0220 12:04:24.428096 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: I0220 12:04:24.428108 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" 
containerName="cluster-policy-controller" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: E0220 12:04:24.428130 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager" Feb 20 12:04:24.428135 master-0 kubenswrapper[7756]: I0220 12:04:24.428144 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: E0220 12:04:24.428169 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd8b719-50cb-45ad-ac5f-a01aa45bf832" containerName="installer" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428182 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd8b719-50cb-45ad-ac5f-a01aa45bf832" containerName="installer" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: E0220 12:04:24.428204 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428216 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428472 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428501 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd8b719-50cb-45ad-ac5f-a01aa45bf832" containerName="installer" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428520 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-cert-syncer" Feb 20 12:04:24.428877 master-0 
kubenswrapper[7756]: I0220 12:04:24.428583 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428617 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428645 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-recovery-controller" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428674 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428691 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428725 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.428877 master-0 kubenswrapper[7756]: I0220 12:04:24.428746 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-cert-syncer" Feb 20 12:04:24.429722 master-0 kubenswrapper[7756]: E0220 12:04:24.428959 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-cert-syncer" Feb 20 12:04:24.429722 master-0 kubenswrapper[7756]: I0220 12:04:24.428976 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" containerName="kube-controller-manager-cert-syncer" Feb 
20 12:04:24.429722 master-0 kubenswrapper[7756]: E0220 12:04:24.429009 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.429722 master-0 kubenswrapper[7756]: I0220 12:04:24.429022 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.429722 master-0 kubenswrapper[7756]: I0220 12:04:24.429252 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767e0793175d588147a983384ee43db" containerName="cluster-policy-controller" Feb 20 12:04:24.519253 master-0 kubenswrapper[7756]: I0220 12:04:24.519156 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:24.519624 master-0 kubenswrapper[7756]: I0220 12:04:24.519575 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:24.587577 master-0 kubenswrapper[7756]: I0220 12:04:24.587468 7756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="a767e0793175d588147a983384ee43db" podUID="65774ccd44b6b404cec890cd0cfa3872" Feb 20 12:04:24.620376 master-0 kubenswrapper[7756]: I0220 12:04:24.620295 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:24.620575 master-0 kubenswrapper[7756]: I0220 12:04:24.620450 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:24.620575 master-0 kubenswrapper[7756]: I0220 12:04:24.620518 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:24.620712 master-0 kubenswrapper[7756]: I0220 12:04:24.620618 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:24.724276 master-0 kubenswrapper[7756]: I0220 12:04:24.724019 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager-cert-syncer/1.log" Feb 20 12:04:24.724744 master-0 kubenswrapper[7756]: I0220 12:04:24.724698 7756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/3.log" Feb 20 12:04:24.726504 master-0 kubenswrapper[7756]: I0220 12:04:24.726450 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager-cert-syncer/0.log" Feb 20 12:04:24.727258 master-0 kubenswrapper[7756]: I0220 12:04:24.727219 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log" Feb 20 12:04:24.727343 master-0 kubenswrapper[7756]: I0220 12:04:24.727317 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:24.731663 master-0 kubenswrapper[7756]: I0220 12:04:24.731496 7756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="a767e0793175d588147a983384ee43db" podUID="65774ccd44b6b404cec890cd0cfa3872" Feb 20 12:04:24.824990 master-0 kubenswrapper[7756]: I0220 12:04:24.824927 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-resource-dir\") pod \"a767e0793175d588147a983384ee43db\" (UID: \"a767e0793175d588147a983384ee43db\") " Feb 20 12:04:24.825410 master-0 kubenswrapper[7756]: I0220 12:04:24.825378 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-cert-dir\") pod \"a767e0793175d588147a983384ee43db\" (UID: \"a767e0793175d588147a983384ee43db\") " Feb 20 12:04:24.825692 master-0 
kubenswrapper[7756]: I0220 12:04:24.825199 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a767e0793175d588147a983384ee43db-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a767e0793175d588147a983384ee43db" (UID: "a767e0793175d588147a983384ee43db"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:04:24.825918 master-0 kubenswrapper[7756]: I0220 12:04:24.825863 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a767e0793175d588147a983384ee43db-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "a767e0793175d588147a983384ee43db" (UID: "a767e0793175d588147a983384ee43db"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:04:24.826909 master-0 kubenswrapper[7756]: I0220 12:04:24.826871 7756 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 12:04:24.827144 master-0 kubenswrapper[7756]: I0220 12:04:24.827106 7756 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a767e0793175d588147a983384ee43db-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 12:04:25.049866 master-0 kubenswrapper[7756]: I0220 12:04:25.049808 7756 generic.go:334] "Generic (PLEG): container finished" podID="a41b23ca-9eed-4eb9-95dc-92418a6f4e86" containerID="2260e76cd3d2df450c12a0158d94e76ddd7d3b92f4e2a837f57c5c73685c7d75" exitCode=0 Feb 20 12:04:25.050150 master-0 kubenswrapper[7756]: I0220 12:04:25.049900 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"a41b23ca-9eed-4eb9-95dc-92418a6f4e86","Type":"ContainerDied","Data":"2260e76cd3d2df450c12a0158d94e76ddd7d3b92f4e2a837f57c5c73685c7d75"} Feb 20 
12:04:25.055855 master-0 kubenswrapper[7756]: I0220 12:04:25.055802 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager-cert-syncer/1.log" Feb 20 12:04:25.056476 master-0 kubenswrapper[7756]: I0220 12:04:25.056430 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/cluster-policy-controller/3.log" Feb 20 12:04:25.058400 master-0 kubenswrapper[7756]: I0220 12:04:25.058357 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager-cert-syncer/0.log" Feb 20 12:04:25.059555 master-0 kubenswrapper[7756]: I0220 12:04:25.059493 7756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a767e0793175d588147a983384ee43db/kube-controller-manager/0.log" Feb 20 12:04:25.059794 master-0 kubenswrapper[7756]: I0220 12:04:25.059759 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0" exitCode=0 Feb 20 12:04:25.059960 master-0 kubenswrapper[7756]: I0220 12:04:25.059846 7756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:25.060099 master-0 kubenswrapper[7756]: I0220 12:04:25.059880 7756 scope.go:117] "RemoveContainer" containerID="b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0" Feb 20 12:04:25.060289 master-0 kubenswrapper[7756]: I0220 12:04:25.059929 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477" exitCode=2 Feb 20 12:04:25.060581 master-0 kubenswrapper[7756]: I0220 12:04:25.060299 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23" exitCode=0 Feb 20 12:04:25.060581 master-0 kubenswrapper[7756]: I0220 12:04:25.060337 7756 generic.go:334] "Generic (PLEG): container finished" podID="a767e0793175d588147a983384ee43db" containerID="b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c" exitCode=0 Feb 20 12:04:25.077996 master-0 kubenswrapper[7756]: I0220 12:04:25.077930 7756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="a767e0793175d588147a983384ee43db" podUID="65774ccd44b6b404cec890cd0cfa3872" Feb 20 12:04:25.088590 master-0 kubenswrapper[7756]: I0220 12:04:25.088485 7756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="a767e0793175d588147a983384ee43db" podUID="65774ccd44b6b404cec890cd0cfa3872" Feb 20 12:04:25.096507 master-0 kubenswrapper[7756]: I0220 12:04:25.096459 7756 scope.go:117] "RemoveContainer" containerID="4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477" Feb 20 12:04:25.123223 master-0 kubenswrapper[7756]: I0220 12:04:25.123175 7756 
scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726" Feb 20 12:04:25.148955 master-0 kubenswrapper[7756]: I0220 12:04:25.148907 7756 scope.go:117] "RemoveContainer" containerID="fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23" Feb 20 12:04:25.175696 master-0 kubenswrapper[7756]: I0220 12:04:25.175656 7756 scope.go:117] "RemoveContainer" containerID="b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c" Feb 20 12:04:25.204911 master-0 kubenswrapper[7756]: I0220 12:04:25.204856 7756 scope.go:117] "RemoveContainer" containerID="112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4" Feb 20 12:04:25.231279 master-0 kubenswrapper[7756]: I0220 12:04:25.231222 7756 scope.go:117] "RemoveContainer" containerID="ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da" Feb 20 12:04:25.262671 master-0 kubenswrapper[7756]: I0220 12:04:25.262633 7756 scope.go:117] "RemoveContainer" containerID="b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0" Feb 20 12:04:25.263001 master-0 kubenswrapper[7756]: E0220 12:04:25.262965 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0\": container with ID starting with b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0 not found: ID does not exist" containerID="b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0" Feb 20 12:04:25.263116 master-0 kubenswrapper[7756]: I0220 12:04:25.263087 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0"} err="failed to get container status \"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0\": rpc error: code = NotFound desc = could not find container 
\"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0\": container with ID starting with b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0 not found: ID does not exist" Feb 20 12:04:25.263214 master-0 kubenswrapper[7756]: I0220 12:04:25.263196 7756 scope.go:117] "RemoveContainer" containerID="4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477" Feb 20 12:04:25.263581 master-0 kubenswrapper[7756]: E0220 12:04:25.263554 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477\": container with ID starting with 4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477 not found: ID does not exist" containerID="4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477" Feb 20 12:04:25.263666 master-0 kubenswrapper[7756]: I0220 12:04:25.263582 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477"} err="failed to get container status \"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477\": rpc error: code = NotFound desc = could not find container \"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477\": container with ID starting with 4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477 not found: ID does not exist" Feb 20 12:04:25.263666 master-0 kubenswrapper[7756]: I0220 12:04:25.263600 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726" Feb 20 12:04:25.263924 master-0 kubenswrapper[7756]: E0220 12:04:25.263903 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726\": container with ID starting with 
60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726 not found: ID does not exist" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726" Feb 20 12:04:25.264024 master-0 kubenswrapper[7756]: I0220 12:04:25.264004 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"} err="failed to get container status \"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726\": rpc error: code = NotFound desc = could not find container \"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726\": container with ID starting with 60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726 not found: ID does not exist" Feb 20 12:04:25.264100 master-0 kubenswrapper[7756]: I0220 12:04:25.264089 7756 scope.go:117] "RemoveContainer" containerID="fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23" Feb 20 12:04:25.264513 master-0 kubenswrapper[7756]: E0220 12:04:25.264477 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23\": container with ID starting with fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23 not found: ID does not exist" containerID="fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23" Feb 20 12:04:25.264606 master-0 kubenswrapper[7756]: I0220 12:04:25.264556 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23"} err="failed to get container status \"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23\": rpc error: code = NotFound desc = could not find container \"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23\": container with ID starting with 
fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23 not found: ID does not exist"
Feb 20 12:04:25.264606 master-0 kubenswrapper[7756]: I0220 12:04:25.264596 7756 scope.go:117] "RemoveContainer" containerID="b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"
Feb 20 12:04:25.265208 master-0 kubenswrapper[7756]: E0220 12:04:25.265170 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c\": container with ID starting with b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c not found: ID does not exist" containerID="b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"
Feb 20 12:04:25.265208 master-0 kubenswrapper[7756]: I0220 12:04:25.265199 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"} err="failed to get container status \"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c\": rpc error: code = NotFound desc = could not find container \"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c\": container with ID starting with b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c not found: ID does not exist"
Feb 20 12:04:25.265326 master-0 kubenswrapper[7756]: I0220 12:04:25.265217 7756 scope.go:117] "RemoveContainer" containerID="112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"
Feb 20 12:04:25.265627 master-0 kubenswrapper[7756]: E0220 12:04:25.265600 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4\": container with ID starting with 112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4 not found: ID does not exist" containerID="112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"
Feb 20 12:04:25.265718 master-0 kubenswrapper[7756]: I0220 12:04:25.265629 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"} err="failed to get container status \"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4\": rpc error: code = NotFound desc = could not find container \"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4\": container with ID starting with 112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4 not found: ID does not exist"
Feb 20 12:04:25.265718 master-0 kubenswrapper[7756]: I0220 12:04:25.265648 7756 scope.go:117] "RemoveContainer" containerID="ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"
Feb 20 12:04:25.265943 master-0 kubenswrapper[7756]: E0220 12:04:25.265915 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da\": container with ID starting with ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da not found: ID does not exist" containerID="ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"
Feb 20 12:04:25.266072 master-0 kubenswrapper[7756]: I0220 12:04:25.266051 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"} err="failed to get container status \"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da\": rpc error: code = NotFound desc = could not find container \"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da\": container with ID starting with ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da not found: ID does not exist"
Feb 20 12:04:25.266155 master-0 kubenswrapper[7756]: I0220 12:04:25.266143 7756 scope.go:117] "RemoveContainer" containerID="b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0"
Feb 20 12:04:25.266658 master-0 kubenswrapper[7756]: I0220 12:04:25.266624 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0"} err="failed to get container status \"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0\": rpc error: code = NotFound desc = could not find container \"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0\": container with ID starting with b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0 not found: ID does not exist"
Feb 20 12:04:25.266658 master-0 kubenswrapper[7756]: I0220 12:04:25.266654 7756 scope.go:117] "RemoveContainer" containerID="4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477"
Feb 20 12:04:25.266988 master-0 kubenswrapper[7756]: I0220 12:04:25.266956 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477"} err="failed to get container status \"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477\": rpc error: code = NotFound desc = could not find container \"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477\": container with ID starting with 4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477 not found: ID does not exist"
Feb 20 12:04:25.267061 master-0 kubenswrapper[7756]: I0220 12:04:25.266986 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"
Feb 20 12:04:25.267359 master-0 kubenswrapper[7756]: I0220 12:04:25.267329 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"} err="failed to get container status \"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726\": rpc error: code = NotFound desc = could not find container \"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726\": container with ID starting with 60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726 not found: ID does not exist"
Feb 20 12:04:25.267359 master-0 kubenswrapper[7756]: I0220 12:04:25.267353 7756 scope.go:117] "RemoveContainer" containerID="fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23"
Feb 20 12:04:25.267664 master-0 kubenswrapper[7756]: I0220 12:04:25.267626 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23"} err="failed to get container status \"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23\": rpc error: code = NotFound desc = could not find container \"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23\": container with ID starting with fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23 not found: ID does not exist"
Feb 20 12:04:25.267749 master-0 kubenswrapper[7756]: I0220 12:04:25.267664 7756 scope.go:117] "RemoveContainer" containerID="b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"
Feb 20 12:04:25.268333 master-0 kubenswrapper[7756]: I0220 12:04:25.268297 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"} err="failed to get container status \"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c\": rpc error: code = NotFound desc = could not find container \"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c\": container with ID starting with b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c not found: ID does not exist"
Feb 20 12:04:25.268414 master-0 kubenswrapper[7756]: I0220 12:04:25.268330 7756 scope.go:117] "RemoveContainer" containerID="112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"
Feb 20 12:04:25.268852 master-0 kubenswrapper[7756]: I0220 12:04:25.268821 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"} err="failed to get container status \"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4\": rpc error: code = NotFound desc = could not find container \"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4\": container with ID starting with 112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4 not found: ID does not exist"
Feb 20 12:04:25.268852 master-0 kubenswrapper[7756]: I0220 12:04:25.268847 7756 scope.go:117] "RemoveContainer" containerID="ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"
Feb 20 12:04:25.269196 master-0 kubenswrapper[7756]: I0220 12:04:25.269174 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"} err="failed to get container status \"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da\": rpc error: code = NotFound desc = could not find container \"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da\": container with ID starting with ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da not found: ID does not exist"
Feb 20 12:04:25.269289 master-0 kubenswrapper[7756]: I0220 12:04:25.269275 7756 scope.go:117] "RemoveContainer" containerID="b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0"
Feb 20 12:04:25.269794 master-0 kubenswrapper[7756]: I0220 12:04:25.269770 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0"} err="failed to get container status \"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0\": rpc error: code = NotFound desc = could not find container \"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0\": container with ID starting with b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0 not found: ID does not exist"
Feb 20 12:04:25.269894 master-0 kubenswrapper[7756]: I0220 12:04:25.269879 7756 scope.go:117] "RemoveContainer" containerID="4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477"
Feb 20 12:04:25.270259 master-0 kubenswrapper[7756]: I0220 12:04:25.270233 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477"} err="failed to get container status \"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477\": rpc error: code = NotFound desc = could not find container \"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477\": container with ID starting with 4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477 not found: ID does not exist"
Feb 20 12:04:25.270259 master-0 kubenswrapper[7756]: I0220 12:04:25.270258 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"
Feb 20 12:04:25.270583 master-0 kubenswrapper[7756]: I0220 12:04:25.270498 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"} err="failed to get container status \"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726\": rpc error: code = NotFound desc = could not find container \"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726\": container with ID starting with 60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726 not found: ID does not exist"
Feb 20 12:04:25.270663 master-0 kubenswrapper[7756]: I0220 12:04:25.270589 7756 scope.go:117] "RemoveContainer" containerID="fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23"
Feb 20 12:04:25.270885 master-0 kubenswrapper[7756]: I0220 12:04:25.270861 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23"} err="failed to get container status \"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23\": rpc error: code = NotFound desc = could not find container \"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23\": container with ID starting with fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23 not found: ID does not exist"
Feb 20 12:04:25.270885 master-0 kubenswrapper[7756]: I0220 12:04:25.270879 7756 scope.go:117] "RemoveContainer" containerID="b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"
Feb 20 12:04:25.271134 master-0 kubenswrapper[7756]: I0220 12:04:25.271106 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"} err="failed to get container status \"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c\": rpc error: code = NotFound desc = could not find container \"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c\": container with ID starting with b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c not found: ID does not exist"
Feb 20 12:04:25.271198 master-0 kubenswrapper[7756]: I0220 12:04:25.271158 7756 scope.go:117] "RemoveContainer" containerID="112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"
Feb 20 12:04:25.271505 master-0 kubenswrapper[7756]: I0220 12:04:25.271460 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"} err="failed to get container status \"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4\": rpc error: code = NotFound desc = could not find container \"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4\": container with ID starting with 112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4 not found: ID does not exist"
Feb 20 12:04:25.271588 master-0 kubenswrapper[7756]: I0220 12:04:25.271505 7756 scope.go:117] "RemoveContainer" containerID="ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"
Feb 20 12:04:25.271830 master-0 kubenswrapper[7756]: I0220 12:04:25.271803 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"} err="failed to get container status \"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da\": rpc error: code = NotFound desc = could not find container \"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da\": container with ID starting with ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da not found: ID does not exist"
Feb 20 12:04:25.271908 master-0 kubenswrapper[7756]: I0220 12:04:25.271829 7756 scope.go:117] "RemoveContainer" containerID="b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0"
Feb 20 12:04:25.272105 master-0 kubenswrapper[7756]: I0220 12:04:25.272080 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0"} err="failed to get container status \"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0\": rpc error: code = NotFound desc = could not find container \"b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0\": container with ID starting with b4e0bc7f17a611c50282f736560d5625b49a89b95bdb2dfc2f0272d897ef1fa0 not found: ID does not exist"
Feb 20 12:04:25.272206 master-0 kubenswrapper[7756]: I0220 12:04:25.272188 7756 scope.go:117] "RemoveContainer" containerID="4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477"
Feb 20 12:04:25.272516 master-0 kubenswrapper[7756]: I0220 12:04:25.272478 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477"} err="failed to get container status \"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477\": rpc error: code = NotFound desc = could not find container \"4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477\": container with ID starting with 4a16b35cb0d14b12bf2dfc0709ad5330e8184da564fa16e571adfcf111b28477 not found: ID does not exist"
Feb 20 12:04:25.272516 master-0 kubenswrapper[7756]: I0220 12:04:25.272503 7756 scope.go:117] "RemoveContainer" containerID="60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"
Feb 20 12:04:25.272772 master-0 kubenswrapper[7756]: I0220 12:04:25.272745 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726"} err="failed to get container status \"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726\": rpc error: code = NotFound desc = could not find container \"60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726\": container with ID starting with 60b17f84792d8c78eaa6edb6a0c648c90c8a2013ea001789a382ea9528d83726 not found: ID does not exist"
Feb 20 12:04:25.272772 master-0 kubenswrapper[7756]: I0220 12:04:25.272768 7756 scope.go:117] "RemoveContainer" containerID="fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23"
Feb 20 12:04:25.273358 master-0 kubenswrapper[7756]: I0220 12:04:25.273325 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23"} err="failed to get container status \"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23\": rpc error: code = NotFound desc = could not find container \"fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23\": container with ID starting with fb7fc3ef76c32755e5be1a3e49c8d959db9e894378e694e1aa2a81af980b8a23 not found: ID does not exist"
Feb 20 12:04:25.273358 master-0 kubenswrapper[7756]: I0220 12:04:25.273354 7756 scope.go:117] "RemoveContainer" containerID="b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"
Feb 20 12:04:25.273722 master-0 kubenswrapper[7756]: I0220 12:04:25.273701 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c"} err="failed to get container status \"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c\": rpc error: code = NotFound desc = could not find container \"b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c\": container with ID starting with b337904f7276167f79e86f5f366aa57323dc10348c499378aca33b938a5a4a1c not found: ID does not exist"
Feb 20 12:04:25.273786 master-0 kubenswrapper[7756]: I0220 12:04:25.273720 7756 scope.go:117] "RemoveContainer" containerID="112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"
Feb 20 12:04:25.273990 master-0 kubenswrapper[7756]: I0220 12:04:25.273965 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4"} err="failed to get container status \"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4\": rpc error: code = NotFound desc = could not find container \"112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4\": container with ID starting with 112e439fa35c40fad9981684b5b248743bcd24108a41b98ab43a5c431f5265d4 not found: ID does not exist"
Feb 20 12:04:25.273990 master-0 kubenswrapper[7756]: I0220 12:04:25.273983 7756 scope.go:117] "RemoveContainer" containerID="ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"
Feb 20 12:04:25.274261 master-0 kubenswrapper[7756]: I0220 12:04:25.274238 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da"} err="failed to get container status \"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da\": rpc error: code = NotFound desc = could not find container \"ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da\": container with ID starting with ac70045143a704406b8eaa07464b733f22592f841832f99bebabf3ac6d8396da not found: ID does not exist"
Feb 20 12:04:26.511500 master-0 kubenswrapper[7756]: I0220 12:04:26.511432 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 20 12:04:26.553777 master-0 kubenswrapper[7756]: I0220 12:04:26.553177 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kube-api-access\") pod \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") "
Feb 20 12:04:26.553777 master-0 kubenswrapper[7756]: I0220 12:04:26.553259 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kubelet-dir\") pod \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") "
Feb 20 12:04:26.553777 master-0 kubenswrapper[7756]: I0220 12:04:26.553404 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-var-lock\") pod \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\" (UID: \"a41b23ca-9eed-4eb9-95dc-92418a6f4e86\") "
Feb 20 12:04:26.553777 master-0 kubenswrapper[7756]: I0220 12:04:26.553782 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a41b23ca-9eed-4eb9-95dc-92418a6f4e86" (UID: "a41b23ca-9eed-4eb9-95dc-92418a6f4e86"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:26.554375 master-0 kubenswrapper[7756]: I0220 12:04:26.553819 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-var-lock" (OuterVolumeSpecName: "var-lock") pod "a41b23ca-9eed-4eb9-95dc-92418a6f4e86" (UID: "a41b23ca-9eed-4eb9-95dc-92418a6f4e86"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:26.554375 master-0 kubenswrapper[7756]: I0220 12:04:26.554131 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:26.554375 master-0 kubenswrapper[7756]: I0220 12:04:26.554160 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:26.558269 master-0 kubenswrapper[7756]: I0220 12:04:26.558200 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a41b23ca-9eed-4eb9-95dc-92418a6f4e86" (UID: "a41b23ca-9eed-4eb9-95dc-92418a6f4e86"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:04:26.593929 master-0 kubenswrapper[7756]: I0220 12:04:26.593845 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a767e0793175d588147a983384ee43db" path="/var/lib/kubelet/pods/a767e0793175d588147a983384ee43db/volumes"
Feb 20 12:04:26.655549 master-0 kubenswrapper[7756]: I0220 12:04:26.655460 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a41b23ca-9eed-4eb9-95dc-92418a6f4e86-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:27.084774 master-0 kubenswrapper[7756]: I0220 12:04:27.084696 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"a41b23ca-9eed-4eb9-95dc-92418a6f4e86","Type":"ContainerDied","Data":"ce9ed94bd982d2f41102a55cb2e618edd19c6224d6f0adfa7cf35da3a1237451"}
Feb 20 12:04:27.084774 master-0 kubenswrapper[7756]: I0220 12:04:27.084777 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce9ed94bd982d2f41102a55cb2e618edd19c6224d6f0adfa7cf35da3a1237451"
Feb 20 12:04:27.085203 master-0 kubenswrapper[7756]: I0220 12:04:27.085170 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 20 12:04:28.097075 master-0 kubenswrapper[7756]: I0220 12:04:28.097008 7756 generic.go:334] "Generic (PLEG): container finished" podID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerID="e6a70c0e0f237b900ba323a2d2250f1ed5e02a069194617f8e9507c1f16cde63" exitCode=0
Feb 20 12:04:28.101892 master-0 kubenswrapper[7756]: I0220 12:04:28.097074 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerDied","Data":"e6a70c0e0f237b900ba323a2d2250f1ed5e02a069194617f8e9507c1f16cde63"}
Feb 20 12:04:28.101892 master-0 kubenswrapper[7756]: I0220 12:04:28.097165 7756 scope.go:117] "RemoveContainer" containerID="59be86e8d4a5781613fee8a9f98dc6c90430b05bfb61e001a26978b78f148625"
Feb 20 12:04:29.109012 master-0 kubenswrapper[7756]: I0220 12:04:29.108904 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" event={"ID":"9c078827-3bdb-4509-aeb3-eb558df1f6e7","Type":"ContainerStarted","Data":"92ccb17a64d5d636f088a02a33d8a748c9cf30cc098ea5de47cf046471465443"}
Feb 20 12:04:29.877334 master-0 kubenswrapper[7756]: I0220 12:04:29.877048 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 12:04:29.877334 master-0 kubenswrapper[7756]: I0220 12:04:29.877317 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 12:04:29.880706 master-0 kubenswrapper[7756]: I0220 12:04:29.880656 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:29.880706 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:29.880706 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:29.880706 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:29.881010 master-0 kubenswrapper[7756]: I0220 12:04:29.880732 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:30.880509 master-0 kubenswrapper[7756]: I0220 12:04:30.880411 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:30.880509 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:30.880509 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:30.880509 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:30.880509 master-0 kubenswrapper[7756]: I0220 12:04:30.880497 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:31.880047 master-0 kubenswrapper[7756]: I0220 12:04:31.879969 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:31.880047 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:31.880047 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:31.880047 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:31.880514 master-0 kubenswrapper[7756]: I0220 12:04:31.880073 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:32.880077 master-0 kubenswrapper[7756]: I0220 12:04:32.879983 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:32.880077 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:32.880077 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:32.880077 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:32.880077 master-0 kubenswrapper[7756]: I0220 12:04:32.880075 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:33.880479 master-0 kubenswrapper[7756]: I0220 12:04:33.880397 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:33.880479 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:33.880479 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:33.880479 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:33.881559 master-0 kubenswrapper[7756]: I0220 12:04:33.880483 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:34.881821 master-0 kubenswrapper[7756]: I0220 12:04:34.881659 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:34.881821 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:34.881821 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:34.881821 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:34.881821 master-0 kubenswrapper[7756]: I0220 12:04:34.881805 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:35.879998 master-0 kubenswrapper[7756]: I0220 12:04:35.879896 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:35.879998 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:35.879998 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:35.879998 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:35.880470 master-0 kubenswrapper[7756]: I0220 12:04:35.880003 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:36.880010 master-0 kubenswrapper[7756]: I0220 12:04:36.879933 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:36.880010 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:36.880010 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:36.880010 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:36.881204 master-0 kubenswrapper[7756]: I0220 12:04:36.880031 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:37.578656 master-0 kubenswrapper[7756]: I0220 12:04:37.578589 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:37.616183 master-0 kubenswrapper[7756]: I0220 12:04:37.616105 7756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7f52d858-a418-486c-bc37-148e4c4eb53e"
Feb 20 12:04:37.616183 master-0 kubenswrapper[7756]: I0220 12:04:37.616176 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7f52d858-a418-486c-bc37-148e4c4eb53e"
Feb 20 12:04:37.637788 master-0 kubenswrapper[7756]: I0220 12:04:37.637662 7756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 20 12:04:37.645484 master-0 kubenswrapper[7756]: I0220 12:04:37.645417 7756 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:37.655323 master-0 kubenswrapper[7756]: I0220 12:04:37.655242 7756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 20 12:04:37.664637 master-0 kubenswrapper[7756]: I0220 12:04:37.664574 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:37.669901 master-0 kubenswrapper[7756]: I0220 12:04:37.669834 7756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 20 12:04:37.697831 master-0 kubenswrapper[7756]: W0220 12:04:37.697748 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65774ccd44b6b404cec890cd0cfa3872.slice/crio-b2b9e99c760ef8b2b1d3b355cd2d86c95a75c8f2455bc3e22e89b188ba7101e3 WatchSource:0}: Error finding container b2b9e99c760ef8b2b1d3b355cd2d86c95a75c8f2455bc3e22e89b188ba7101e3: Status 404 returned error can't find the container with id b2b9e99c760ef8b2b1d3b355cd2d86c95a75c8f2455bc3e22e89b188ba7101e3
Feb 20 12:04:37.882846 master-0 kubenswrapper[7756]: I0220 12:04:37.882667 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:37.882846 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:37.882846 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:37.882846 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:37.882846 master-0 kubenswrapper[7756]: I0220 12:04:37.882776 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:38.189759 master-0 kubenswrapper[7756]: I0220 12:04:38.189573 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"fb26f752e48be63937e70537d486ea02b5e41733fdb3b27eed62024dc371a88d"}
Feb 20 12:04:38.189759 master-0 kubenswrapper[7756]: I0220 12:04:38.189659 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"b2b9e99c760ef8b2b1d3b355cd2d86c95a75c8f2455bc3e22e89b188ba7101e3"}
Feb 20 12:04:38.881051 master-0 kubenswrapper[7756]: I0220 12:04:38.880981 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:38.881051 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:38.881051 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:38.881051 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:38.881505 master-0 kubenswrapper[7756]: I0220 12:04:38.881055 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:39.202217 master-0 kubenswrapper[7756]: I0220 12:04:39.202046 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"b0d8055ab8671dd87e8c6f4600409f2168abd1ce04e2f64cb6ec241a84ad82db"}
Feb 20 12:04:39.202217 master-0 kubenswrapper[7756]: I0220 12:04:39.202119 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"7d1e40608e20859f716be438c8e8c5245ae85a9137bce4bf53bfccc4ff8fc568"}
Feb 20 12:04:39.202217 master-0 kubenswrapper[7756]: I0220 12:04:39.202140 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"f81c629f14de2675e27ed03b16f717338fc763727ad4d8279bef5f402d84b0bd"}
Feb 20 12:04:39.232663 master-0 kubenswrapper[7756]: I0220 12:04:39.232554 7756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.232501848 podStartE2EDuration="2.232501848s" podCreationTimestamp="2026-02-20 12:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:04:39.229365429 +0000 UTC m=+924.971613477" watchObservedRunningTime="2026-02-20 12:04:39.232501848 +0000 UTC m=+924.974749866"
Feb 20 12:04:39.880242 master-0 kubenswrapper[7756]: I0220 12:04:39.880142 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:39.880242 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:39.880242 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:39.880242 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:39.880749 master-0 kubenswrapper[7756]: I0220 12:04:39.880246 7756 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:04:40.880201 master-0 kubenswrapper[7756]: I0220 12:04:40.880122 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:04:40.880201 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:04:40.880201 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:04:40.880201 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:04:40.881291 master-0 kubenswrapper[7756]: I0220 12:04:40.880238 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:04:41.879977 master-0 kubenswrapper[7756]: I0220 12:04:41.879904 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:04:41.879977 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:04:41.879977 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:04:41.879977 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:04:41.879977 master-0 kubenswrapper[7756]: I0220 12:04:41.879959 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:04:42.880109 
master-0 kubenswrapper[7756]: I0220 12:04:42.880007 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:04:42.880109 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:04:42.880109 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:04:42.880109 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:04:42.880109 master-0 kubenswrapper[7756]: I0220 12:04:42.880096 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:04:43.880964 master-0 kubenswrapper[7756]: I0220 12:04:43.880875 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:04:43.880964 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:04:43.880964 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:04:43.880964 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:04:43.882043 master-0 kubenswrapper[7756]: I0220 12:04:43.881029 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:04:44.881436 master-0 kubenswrapper[7756]: I0220 12:04:44.881337 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:04:44.881436 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:04:44.881436 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:04:44.881436 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:04:44.882374 master-0 kubenswrapper[7756]: I0220 12:04:44.881432 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:04:45.880481 master-0 kubenswrapper[7756]: I0220 12:04:45.880408 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:04:45.880481 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:04:45.880481 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:04:45.880481 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:04:45.880920 master-0 kubenswrapper[7756]: I0220 12:04:45.880491 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:04:46.427760 master-0 kubenswrapper[7756]: I0220 12:04:46.427672 7756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 20 12:04:46.428518 master-0 kubenswrapper[7756]: E0220 12:04:46.428080 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41b23ca-9eed-4eb9-95dc-92418a6f4e86" containerName="installer" Feb 20 12:04:46.428518 master-0 
kubenswrapper[7756]: I0220 12:04:46.428102 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41b23ca-9eed-4eb9-95dc-92418a6f4e86" containerName="installer" Feb 20 12:04:46.428518 master-0 kubenswrapper[7756]: I0220 12:04:46.428333 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41b23ca-9eed-4eb9-95dc-92418a6f4e86" containerName="installer" Feb 20 12:04:46.428975 master-0 kubenswrapper[7756]: I0220 12:04:46.428925 7756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 20 12:04:46.429162 master-0 kubenswrapper[7756]: I0220 12:04:46.429094 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.429343 master-0 kubenswrapper[7756]: I0220 12:04:46.429284 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" containerID="cri-o://a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe" gracePeriod=15 Feb 20 12:04:46.429610 master-0 kubenswrapper[7756]: I0220 12:04:46.429371 7756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315" gracePeriod=15 Feb 20 12:04:46.431360 master-0 kubenswrapper[7756]: I0220 12:04:46.431295 7756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 20 12:04:46.432453 master-0 kubenswrapper[7756]: E0220 12:04:46.432396 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 20 12:04:46.432453 
master-0 kubenswrapper[7756]: I0220 12:04:46.432447 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 20 12:04:46.432633 master-0 kubenswrapper[7756]: E0220 12:04:46.432503 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 20 12:04:46.432633 master-0 kubenswrapper[7756]: I0220 12:04:46.432523 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 20 12:04:46.432633 master-0 kubenswrapper[7756]: E0220 12:04:46.432616 7756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 20 12:04:46.432822 master-0 kubenswrapper[7756]: I0220 12:04:46.432639 7756 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 20 12:04:46.432947 master-0 kubenswrapper[7756]: I0220 12:04:46.432899 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 20 12:04:46.432947 master-0 kubenswrapper[7756]: I0220 12:04:46.432941 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 20 12:04:46.433084 master-0 kubenswrapper[7756]: I0220 12:04:46.432973 7756 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 20 12:04:46.437160 master-0 kubenswrapper[7756]: I0220 12:04:46.437084 7756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.475057 master-0 kubenswrapper[7756]: I0220 12:04:46.474943 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.475333 master-0 kubenswrapper[7756]: I0220 12:04:46.475064 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.475333 master-0 kubenswrapper[7756]: I0220 12:04:46.475157 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.475333 master-0 kubenswrapper[7756]: I0220 12:04:46.475229 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.475562 master-0 kubenswrapper[7756]: I0220 12:04:46.475363 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.475562 master-0 kubenswrapper[7756]: I0220 12:04:46.475456 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.475714 master-0 kubenswrapper[7756]: I0220 12:04:46.475592 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.475991 master-0 kubenswrapper[7756]: I0220 12:04:46.475916 7756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.492333 master-0 kubenswrapper[7756]: E0220 12:04:46.492250 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.576995 master-0 kubenswrapper[7756]: I0220 12:04:46.576922 7756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.577274 master-0 kubenswrapper[7756]: I0220 12:04:46.577085 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.577513 master-0 kubenswrapper[7756]: I0220 12:04:46.577462 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.577690 master-0 kubenswrapper[7756]: I0220 12:04:46.577557 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.577690 master-0 kubenswrapper[7756]: I0220 12:04:46.577594 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.577690 master-0 kubenswrapper[7756]: I0220 12:04:46.577643 7756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.577690 master-0 kubenswrapper[7756]: I0220 12:04:46.577673 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.579076 master-0 kubenswrapper[7756]: I0220 12:04:46.577720 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.579076 master-0 kubenswrapper[7756]: I0220 12:04:46.577751 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.579076 master-0 kubenswrapper[7756]: I0220 12:04:46.577763 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.579076 master-0 
kubenswrapper[7756]: I0220 12:04:46.577789 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.579076 master-0 kubenswrapper[7756]: I0220 12:04:46.577887 7756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.579076 master-0 kubenswrapper[7756]: I0220 12:04:46.577819 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.579076 master-0 kubenswrapper[7756]: I0220 12:04:46.577949 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.579076 master-0 kubenswrapper[7756]: I0220 12:04:46.577992 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 
12:04:46.579738 master-0 kubenswrapper[7756]: I0220 12:04:46.579166 7756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:46.793550 master-0 kubenswrapper[7756]: I0220 12:04:46.793445 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:46.829220 master-0 kubenswrapper[7756]: W0220 12:04:46.829152 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb342c942d3d92fd08ed7cf68fafb94c.slice/crio-1090f75c486a323ae59e4678633d1d8f31d3b0da933bc500d26a674a58096eb0 WatchSource:0}: Error finding container 1090f75c486a323ae59e4678633d1d8f31d3b0da933bc500d26a674a58096eb0: Status 404 returned error can't find the container with id 1090f75c486a323ae59e4678633d1d8f31d3b0da933bc500d26a674a58096eb0 Feb 20 12:04:46.834143 master-0 kubenswrapper[7756]: E0220 12:04:46.833925 7756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.1895f2e360a28ec4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:eb342c942d3d92fd08ed7cf68fafb94c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already 
present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 12:04:46.832922308 +0000 UTC m=+932.575170356,LastTimestamp:2026-02-20 12:04:46.832922308 +0000 UTC m=+932.575170356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 12:04:46.879596 master-0 kubenswrapper[7756]: I0220 12:04:46.879453 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 12:04:46.879596 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld Feb 20 12:04:46.879596 master-0 kubenswrapper[7756]: [+]process-running ok Feb 20 12:04:46.879596 master-0 kubenswrapper[7756]: healthz check failed Feb 20 12:04:46.879876 master-0 kubenswrapper[7756]: I0220 12:04:46.879611 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 12:04:47.271821 master-0 kubenswrapper[7756]: I0220 12:04:47.271733 7756 generic.go:334] "Generic (PLEG): container finished" podID="97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" containerID="54c853c11767fb2e9c16b82b830e00aa5d8a596a5498e4384e29c0cde6cc8aed" exitCode=0 Feb 20 12:04:47.271821 master-0 kubenswrapper[7756]: I0220 12:04:47.271775 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d","Type":"ContainerDied","Data":"54c853c11767fb2e9c16b82b830e00aa5d8a596a5498e4384e29c0cde6cc8aed"} Feb 20 12:04:47.272923 master-0 kubenswrapper[7756]: I0220 12:04:47.272869 7756 status_manager.go:851] "Failed to get status for pod" 
podUID="97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:04:47.274675 master-0 kubenswrapper[7756]: I0220 12:04:47.274633 7756 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315" exitCode=0 Feb 20 12:04:47.276793 master-0 kubenswrapper[7756]: I0220 12:04:47.276760 7756 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0" exitCode=0 Feb 20 12:04:47.276793 master-0 kubenswrapper[7756]: I0220 12:04:47.276781 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0"} Feb 20 12:04:47.276793 master-0 kubenswrapper[7756]: I0220 12:04:47.276795 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"1090f75c486a323ae59e4678633d1d8f31d3b0da933bc500d26a674a58096eb0"} Feb 20 12:04:47.277641 master-0 kubenswrapper[7756]: E0220 12:04:47.277599 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:47.277641 master-0 kubenswrapper[7756]: I0220 12:04:47.277604 7756 status_manager.go:851] "Failed to get status for pod" podUID="97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" 
pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 12:04:47.665862 master-0 kubenswrapper[7756]: I0220 12:04:47.665787 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:47.665862 master-0 kubenswrapper[7756]: I0220 12:04:47.665848 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:47.665862 master-0 kubenswrapper[7756]: I0220 12:04:47.665864 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:47.665862 master-0 kubenswrapper[7756]: I0220 12:04:47.665875 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:47.678351 master-0 kubenswrapper[7756]: I0220 12:04:47.677901 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:47.681197 master-0 kubenswrapper[7756]: I0220 12:04:47.680304 7756 status_manager.go:851] "Failed to get status for pod" podUID="65774ccd44b6b404cec890cd0cfa3872" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 12:04:47.684737 master-0 kubenswrapper[7756]: I0220 12:04:47.681697 7756 status_manager.go:851] "Failed to get status for pod" podUID="97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 12:04:47.684737 master-0 kubenswrapper[7756]: I0220 12:04:47.683278 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:47.684737 master-0 kubenswrapper[7756]: I0220 12:04:47.684205 7756 status_manager.go:851] "Failed to get status for pod" podUID="65774ccd44b6b404cec890cd0cfa3872" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 12:04:47.685240 master-0 kubenswrapper[7756]: I0220 12:04:47.685096 7756 status_manager.go:851] "Failed to get status for pod" podUID="97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 12:04:47.879833 master-0 kubenswrapper[7756]: I0220 12:04:47.879728 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:47.879833 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:47.879833 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:47.879833 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:47.880126 master-0 kubenswrapper[7756]: I0220 12:04:47.879886 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:48.287480 master-0 kubenswrapper[7756]: I0220 12:04:48.287402 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c"}
Feb 20 12:04:48.287706 master-0 kubenswrapper[7756]: I0220 12:04:48.287483 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb"}
Feb 20 12:04:48.287706 master-0 kubenswrapper[7756]: I0220 12:04:48.287504 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d"}
Feb 20 12:04:48.291116 master-0 kubenswrapper[7756]: I0220 12:04:48.291062 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:48.296656 master-0 kubenswrapper[7756]: I0220 12:04:48.294671 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:04:48.737632 master-0 kubenswrapper[7756]: I0220 12:04:48.736894 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 12:04:48.742986 master-0 kubenswrapper[7756]: I0220 12:04:48.742946 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 12:04:48.826006 master-0 kubenswrapper[7756]: I0220 12:04:48.825970 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") pod \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") "
Feb 20 12:04:48.826191 master-0 kubenswrapper[7756]: I0220 12:04:48.826042 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 12:04:48.826191 master-0 kubenswrapper[7756]: I0220 12:04:48.826115 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 12:04:48.826191 master-0 kubenswrapper[7756]: I0220 12:04:48.826146 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826174 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826219 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs" (OuterVolumeSpecName: "logs") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826259 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826259 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826282 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826296 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826304 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets" (OuterVolumeSpecName: "secrets") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826359 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") "
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826387 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826403 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config" (OuterVolumeSpecName: "config") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:48.826418 master-0 kubenswrapper[7756]: I0220 12:04:48.826419 7756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") pod \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") "
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.826810 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.826869 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock" (OuterVolumeSpecName: "var-lock") pod "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.826965 7756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.826984 7756 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.826998 7756 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.827012 7756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.827025 7756 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.827036 7756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.827047 7756 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:48.827141 master-0 kubenswrapper[7756]: I0220 12:04:48.827057 7756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:48.829840 master-0 kubenswrapper[7756]: I0220 12:04:48.829793 7756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:04:48.897052 master-0 kubenswrapper[7756]: I0220 12:04:48.896952 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:48.897052 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:48.897052 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:48.897052 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:48.897410 master-0 kubenswrapper[7756]: I0220 12:04:48.897058 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:48.928822 master-0 kubenswrapper[7756]: I0220 12:04:48.928747 7756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 12:04:49.308608 master-0 kubenswrapper[7756]: I0220 12:04:49.304917 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d","Type":"ContainerDied","Data":"3590f63863912596b171ca5f35809210ae59c7b19c2fdb801182abdc3cd97397"}
Feb 20 12:04:49.308608 master-0 kubenswrapper[7756]: I0220 12:04:49.304971 7756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3590f63863912596b171ca5f35809210ae59c7b19c2fdb801182abdc3cd97397"
Feb 20 12:04:49.308608 master-0 kubenswrapper[7756]: I0220 12:04:49.305054 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 12:04:49.325598 master-0 kubenswrapper[7756]: I0220 12:04:49.325550 7756 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe" exitCode=0
Feb 20 12:04:49.325766 master-0 kubenswrapper[7756]: I0220 12:04:49.325646 7756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 12:04:49.325813 master-0 kubenswrapper[7756]: I0220 12:04:49.325660 7756 scope.go:117] "RemoveContainer" containerID="553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315"
Feb 20 12:04:49.329636 master-0 kubenswrapper[7756]: I0220 12:04:49.329607 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"ac763378dacfc4363dcfb084085dbc52f6dc5edd975cf1b421f17f519d7cca40"}
Feb 20 12:04:49.329697 master-0 kubenswrapper[7756]: I0220 12:04:49.329643 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412"}
Feb 20 12:04:49.329780 master-0 kubenswrapper[7756]: I0220 12:04:49.329764 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:04:49.348602 master-0 kubenswrapper[7756]: I0220 12:04:49.348573 7756 scope.go:117] "RemoveContainer" containerID="a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe"
Feb 20 12:04:49.365321 master-0 kubenswrapper[7756]: I0220 12:04:49.365277 7756 scope.go:117] "RemoveContainer" containerID="916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790"
Feb 20 12:04:49.387469 master-0 kubenswrapper[7756]: I0220 12:04:49.387427 7756 scope.go:117] "RemoveContainer" containerID="553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315"
Feb 20 12:04:49.398619 master-0 kubenswrapper[7756]: E0220 12:04:49.397972 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315\": container with ID starting with 553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315 not found: ID does not exist" containerID="553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315"
Feb 20 12:04:49.398619 master-0 kubenswrapper[7756]: I0220 12:04:49.398080 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315"} err="failed to get container status \"553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315\": rpc error: code = NotFound desc = could not find container \"553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315\": container with ID starting with 553dca30a8dfd11fe251075900d8d07349a66d2b7a86bc97b7536eb7dfb88315 not found: ID does not exist"
Feb 20 12:04:49.398619 master-0 kubenswrapper[7756]: I0220 12:04:49.398130 7756 scope.go:117] "RemoveContainer" containerID="a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe"
Feb 20 12:04:49.398888 master-0 kubenswrapper[7756]: E0220 12:04:49.398808 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe\": container with ID starting with a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe not found: ID does not exist" containerID="a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe"
Feb 20 12:04:49.398888 master-0 kubenswrapper[7756]: I0220 12:04:49.398858 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe"} err="failed to get container status \"a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe\": rpc error: code = NotFound desc = could not find container \"a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe\": container with ID starting with a1efa78f7f5d27240191b971820a5d5e18a579348d72495e656c080f9213d5fe not found: ID does not exist"
Feb 20 12:04:49.398888 master-0 kubenswrapper[7756]: I0220 12:04:49.398875 7756 scope.go:117] "RemoveContainer" containerID="916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790"
Feb 20 12:04:49.399598 master-0 kubenswrapper[7756]: E0220 12:04:49.399198 7756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790\": container with ID starting with 916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790 not found: ID does not exist" containerID="916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790"
Feb 20 12:04:49.399598 master-0 kubenswrapper[7756]: I0220 12:04:49.399249 7756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790"} err="failed to get container status \"916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790\": rpc error: code = NotFound desc = could not find container \"916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790\": container with ID starting with 916faa0bd31e938470f1917fc27df9d9c5c42d01e4d8c634e516e1d594156790 not found: ID does not exist"
Feb 20 12:04:49.882333 master-0 kubenswrapper[7756]: I0220 12:04:49.882263 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:49.882333 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:49.882333 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:49.882333 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:49.883118 master-0 kubenswrapper[7756]: I0220 12:04:49.882362 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:50.592521 master-0 kubenswrapper[7756]: I0220 12:04:50.592436 7756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687e92a6cecf1e2beeef16a0b322ad08" path="/var/lib/kubelet/pods/687e92a6cecf1e2beeef16a0b322ad08/volumes"
Feb 20 12:04:50.593281 master-0 kubenswrapper[7756]: I0220 12:04:50.593234 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 20 12:04:50.880722 master-0 kubenswrapper[7756]: I0220 12:04:50.880563 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:50.880722 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:50.880722 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:50.880722 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:50.880722 master-0 kubenswrapper[7756]: I0220 12:04:50.880658 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:51.493216 master-0 kubenswrapper[7756]: I0220 12:04:51.493141 7756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:04:51.527500 master-0 kubenswrapper[7756]: W0220 12:04:51.527418 7756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c4f5d60772fa42f26e9c219bffa62b9.slice/crio-b37a738de54db612fada8bd81cf2bdbaea3d8eea466401eb8ad83715ec6bea2f WatchSource:0}: Error finding container b37a738de54db612fada8bd81cf2bdbaea3d8eea466401eb8ad83715ec6bea2f: Status 404 returned error can't find the container with id b37a738de54db612fada8bd81cf2bdbaea3d8eea466401eb8ad83715ec6bea2f
Feb 20 12:04:51.794733 master-0 kubenswrapper[7756]: I0220 12:04:51.794646 7756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:04:51.794733 master-0 kubenswrapper[7756]: I0220 12:04:51.794739 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:04:51.805853 master-0 kubenswrapper[7756]: I0220 12:04:51.805808 7756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:04:51.880281 master-0 kubenswrapper[7756]: I0220 12:04:51.880208 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:51.880281 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:51.880281 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:51.880281 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:51.880842 master-0 kubenswrapper[7756]: I0220 12:04:51.880301 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:52.358990 master-0 kubenswrapper[7756]: I0220 12:04:52.358871 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"fb183355686e4afc132c4d4de7e53c26823b10e5e50f94804dcb7abd86778e66"}
Feb 20 12:04:52.358990 master-0 kubenswrapper[7756]: I0220 12:04:52.358959 7756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"b37a738de54db612fada8bd81cf2bdbaea3d8eea466401eb8ad83715ec6bea2f"}
Feb 20 12:04:52.371027 master-0 kubenswrapper[7756]: E0220 12:04:52.370012 7756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:04:52.604130 master-0 kubenswrapper[7756]: I0220 12:04:52.604057 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 20 12:04:52.881108 master-0 kubenswrapper[7756]: I0220 12:04:52.881013 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:52.881108 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:52.881108 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:52.881108 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:52.881894 master-0 kubenswrapper[7756]: I0220 12:04:52.881118 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:53.880287 master-0 kubenswrapper[7756]: I0220 12:04:53.880199 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:53.880287 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:53.880287 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:53.880287 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:53.881630 master-0 kubenswrapper[7756]: I0220 12:04:53.880311 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:54.591037 master-0 kubenswrapper[7756]: I0220 12:04:54.590963 7756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 20 12:04:54.882602 master-0 kubenswrapper[7756]: I0220 12:04:54.882106 7756 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-fkkd5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 12:04:54.882602 master-0 kubenswrapper[7756]: [-]has-synced failed: reason withheld
Feb 20 12:04:54.882602 master-0 kubenswrapper[7756]: [+]process-running ok
Feb 20 12:04:54.882602 master-0 kubenswrapper[7756]: healthz check failed
Feb 20 12:04:54.882602 master-0 kubenswrapper[7756]: I0220 12:04:54.882192 7756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" podUID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 12:04:55.133185 master-0 kubenswrapper[7756]: I0220 12:04:55.133066 7756 request.go:700] Waited for 1.003841139s, retries: 1, retry-after: 5s - retry-reason: 503 - request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=12695&timeout=45m26s&timeoutSeconds=2726&watch=true
Feb 20 12:04:55.135314 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 20 12:04:55.162469 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 20 12:04:55.163100 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 20 12:04:55.165831 master-0 systemd[1]: kubelet.service: Consumed 2min 40.937s CPU time.
Feb 20 12:04:55.193924 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 20 12:04:55.342845 master-0 kubenswrapper[31420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 12:04:55.342845 master-0 kubenswrapper[31420]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 20 12:04:55.342845 master-0 kubenswrapper[31420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 12:04:55.342845 master-0 kubenswrapper[31420]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 20 12:04:55.342845 master-0 kubenswrapper[31420]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 20 12:04:55.342845 master-0 kubenswrapper[31420]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 20 12:04:55.344882 master-0 kubenswrapper[31420]: I0220 12:04:55.342911 31420 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345138 31420 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345156 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345162 31420 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345167 31420 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345173 31420 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345179 31420 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345184 31420 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345188 31420 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345193 31420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 12:04:55.345181 master-0 kubenswrapper[31420]: W0220 12:04:55.345198 31420 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345203 31420 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345208 31420 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345212 31420 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345217 31420 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345221 31420 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345225 31420 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345229 31420 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345233 31420 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345236 31420 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345240 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345244 31420 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345248 31420 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345257 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345261 31420 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345265 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345269 31420 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345273 31420 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345276 31420 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345280 31420 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 12:04:55.345876 master-0 kubenswrapper[31420]: W0220 12:04:55.345283 31420 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345287 31420 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345291 31420 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345294 31420 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345298 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345301 31420 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345305 31420 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345308 31420 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345312 31420 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345316 31420 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345320 31420 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345323 31420 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345327 31420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345330 31420 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345334 31420 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345337 31420 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345341 31420 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345345 31420 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345349 31420 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345353 31420 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 12:04:55.354440 master-0 kubenswrapper[31420]: W0220 12:04:55.345358 31420 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345363 31420 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345367 31420 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345371 31420 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345374 31420 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345379 31420 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345383 31420 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345387 31420 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345391 31420 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345395 31420 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345400 31420 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345404 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345409 31420 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345412 31420 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345416 31420 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345420 31420 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345424 31420 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345428 31420 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345431 31420 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 12:04:55.357296 master-0 kubenswrapper[31420]: W0220 12:04:55.345435 31420 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: W0220 12:04:55.345439 31420 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: W0220 12:04:55.345442 31420 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: W0220 12:04:55.345446 31420 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345561 31420 flags.go:64] FLAG: --address="0.0.0.0"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345570 31420 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345578 31420 flags.go:64] FLAG: --anonymous-auth="true"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345584 31420 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345589 31420 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345594 31420 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345600 31420 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345605 31420 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345609 31420 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345614 31420 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345618 31420 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345623 31420 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345627 31420 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345631 31420 flags.go:64] FLAG: --cgroup-root=""
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345635 31420 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345640 31420 flags.go:64] FLAG: --client-ca-file=""
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345644 31420 flags.go:64] FLAG: --cloud-config=""
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345648 31420 flags.go:64] FLAG: --cloud-provider=""
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345653 31420 flags.go:64] FLAG: --cluster-dns="[]"
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345657 31420 flags.go:64] FLAG: --cluster-domain=""
Feb 20 12:04:55.361356 master-0 kubenswrapper[31420]: I0220 12:04:55.345661 31420 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345666 31420 flags.go:64] FLAG: --config-dir=""
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345670 31420 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345674 31420 flags.go:64] FLAG: --container-log-max-files="5"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345679 31420 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345684 31420 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345688 31420 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345692 31420 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345696 31420 flags.go:64] FLAG: --contention-profiling="false"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345700 31420 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345705 31420 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345709 31420 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345714 31420 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345719 31420 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345723 31420 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345727 31420 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345731 31420 flags.go:64] FLAG: --enable-load-reader="false"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345735 31420 flags.go:64] FLAG: --enable-server="true"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345741 31420 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345747 31420 flags.go:64] FLAG: --event-burst="100"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345752 31420 flags.go:64] FLAG: --event-qps="50"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345756 31420 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345760 31420 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345765 31420 flags.go:64] FLAG: --eviction-hard=""
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345769 31420 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 20 12:04:55.365380 master-0 kubenswrapper[31420]: I0220 12:04:55.345774 31420 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345778 31420 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345783 31420 flags.go:64] FLAG: --eviction-soft=""
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345788 31420 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345792 31420 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345796 31420 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345800 31420 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345804 31420 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345809 31420 flags.go:64] FLAG: --fail-swap-on="true"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345814 31420 flags.go:64] FLAG: --feature-gates=""
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345819 31420 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345823 31420 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345827 31420 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345831 31420 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345836 31420 flags.go:64] FLAG: --healthz-port="10248"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345840 31420 flags.go:64] FLAG: --help="false"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345845 31420 flags.go:64] FLAG: --hostname-override=""
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345848 31420 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345853 31420 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345857 31420 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345861 31420 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345865 31420 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345869 31420 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345873 31420 flags.go:64] FLAG: --image-service-endpoint=""
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345878 31420 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 20 12:04:55.366476 master-0 kubenswrapper[31420]: I0220 12:04:55.345882 31420 flags.go:64] FLAG: --kube-api-burst="100"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345887 31420 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345891 31420 flags.go:64] FLAG: --kube-api-qps="50"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345903 31420 flags.go:64] FLAG: --kube-reserved=""
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345908 31420 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345912 31420 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345917 31420 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345921 31420 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345925 31420 flags.go:64] FLAG: --lock-file=""
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345930 31420 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345934 31420 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345939 31420 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345944 31420 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345948 31420 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345953 31420 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345957 31420 flags.go:64] FLAG: --logging-format="text"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345961 31420 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345966 31420 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345970 31420 flags.go:64] FLAG: --manifest-url=""
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345974 31420 flags.go:64] FLAG: --manifest-url-header=""
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345980 31420 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345984 31420 flags.go:64] FLAG: --max-open-files="1000000"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345989 31420 flags.go:64] FLAG: --max-pods="110"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345994 31420 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.345998 31420 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 20 12:04:55.367450 master-0 kubenswrapper[31420]: I0220 12:04:55.346002 31420 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346007 31420 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346011 31420 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346015 31420 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346023 31420 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346032 31420 flags.go:64] FLAG: --node-status-max-images="50"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346037 31420 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346042 31420 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346047 31420 flags.go:64] FLAG: --pod-cidr=""
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346052 31420 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346062 31420 flags.go:64] FLAG: --pod-manifest-path=""
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346067 31420 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346072 31420 flags.go:64] FLAG: --pods-per-core="0"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346077 31420 flags.go:64] FLAG: --port="10250"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346082 31420 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346088 31420 flags.go:64] FLAG: --provider-id=""
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346093 31420 flags.go:64] FLAG: --qos-reserved=""
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346099 31420 flags.go:64] FLAG: --read-only-port="10255"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346104 31420 flags.go:64] FLAG: --register-node="true"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346109 31420 flags.go:64] FLAG: --register-schedulable="true"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346115 31420 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346124 31420 flags.go:64] FLAG: --registry-burst="10"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346129 31420 flags.go:64] FLAG: --registry-qps="5"
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346133 31420 flags.go:64] FLAG: --reserved-cpus=""
Feb 20 12:04:55.368542 master-0 kubenswrapper[31420]: I0220 12:04:55.346138 31420 flags.go:64] FLAG: --reserved-memory=""
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346143 31420 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346148 31420 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346152 31420 flags.go:64] FLAG: --rotate-certificates="false"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346157 31420 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346161 31420 flags.go:64] FLAG: --runonce="false"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346165 31420 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346170 31420 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346175 31420 flags.go:64] FLAG: --seccomp-default="false"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346179 31420 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346183 31420 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346187 31420 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346191 31420 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346198 31420 flags.go:64] FLAG: --storage-driver-password="root"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346202 31420 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346206 31420 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346210 31420 flags.go:64] FLAG: --storage-driver-user="root"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346214 31420 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346218 31420 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346222 31420 flags.go:64] FLAG: --system-cgroups=""
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346226 31420 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346234 31420 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346238 31420 flags.go:64] FLAG: --tls-cert-file=""
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346242 31420 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346247 31420 flags.go:64] FLAG: --tls-min-version=""
Feb 20 12:04:55.369433 master-0 kubenswrapper[31420]: I0220 12:04:55.346251 31420 flags.go:64] FLAG: --tls-private-key-file=""
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: I0220 12:04:55.346255 31420 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: I0220 12:04:55.346259 31420 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: I0220 12:04:55.346263 31420 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: I0220 12:04:55.346268 31420 flags.go:64] FLAG: --v="2"
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: I0220 12:04:55.346273 31420 flags.go:64] FLAG: --version="false"
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: I0220 12:04:55.346278 31420 flags.go:64] FLAG: --vmodule=""
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: I0220 12:04:55.346283 31420 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: I0220 12:04:55.346288 31420 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346406 31420 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346412 31420 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346416 31420 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346420 31420 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346424 31420 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346428 31420 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346432 31420 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346436 31420 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346440 31420 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346444 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346448 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346454 31420 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346458 31420 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 12:04:55.372714 master-0 kubenswrapper[31420]: W0220 12:04:55.346462 31420 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346465 31420 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346469 31420 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346473 31420 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346476 31420 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346481 31420 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346487 31420 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346491 31420 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346495 31420 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346498 31420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346503 31420 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346507 31420 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346512 31420 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346516 31420 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346535 31420 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346540 31420 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346545 31420 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346549 31420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346553 31420 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 20 12:04:55.373262 master-0 kubenswrapper[31420]: W0220 12:04:55.346557 31420 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346561 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346565 31420 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346569 31420 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346573 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346577 31420 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346581 31420 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346585 31420 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346589 31420 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346593 31420 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346597 31420 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 20 12:04:55.373784 master-0 
kubenswrapper[31420]: W0220 12:04:55.346602 31420 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346606 31420 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346611 31420 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346614 31420 feature_gate.go:330] unrecognized feature gate: Example Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346618 31420 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346622 31420 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346625 31420 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346629 31420 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346634 31420 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 20 12:04:55.373784 master-0 kubenswrapper[31420]: W0220 12:04:55.346638 31420 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346642 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346645 31420 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346649 31420 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 
20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346652 31420 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346656 31420 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346660 31420 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346663 31420 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346667 31420 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346670 31420 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346674 31420 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346679 31420 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346684 31420 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346688 31420 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346692 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346696 31420 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346700 31420 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346703 31420 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346707 31420 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 20 12:04:55.374276 master-0 kubenswrapper[31420]: W0220 12:04:55.346710 31420 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: I0220 12:04:55.346723 31420 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: I0220 12:04:55.351038 31420 server.go:491] "Kubelet version" 
kubeletVersion="v1.31.14" Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: I0220 12:04:55.351071 31420 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351172 31420 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351181 31420 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351188 31420 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351193 31420 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351197 31420 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351201 31420 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351204 31420 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351208 31420 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351212 31420 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351216 31420 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351220 31420 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 20 12:04:55.374768 master-0 kubenswrapper[31420]: W0220 12:04:55.351224 31420 feature_gate.go:330] unrecognized 
feature gate: NetworkLiveMigration Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351227 31420 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351231 31420 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351236 31420 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351249 31420 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351253 31420 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351257 31420 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351261 31420 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351265 31420 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351269 31420 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351273 31420 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351276 31420 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351280 31420 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351283 31420 feature_gate.go:330] 
unrecognized feature gate: AutomatedEtcdBackup Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351287 31420 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351291 31420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351295 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351298 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351302 31420 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351306 31420 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 20 12:04:55.375152 master-0 kubenswrapper[31420]: W0220 12:04:55.351310 31420 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351313 31420 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351318 31420 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351324 31420 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351329 31420 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351333 31420 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351338 31420 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351343 31420 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351347 31420 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351351 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351355 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351358 31420 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351362 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351366 31420 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351369 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351373 31420 feature_gate.go:330] unrecognized feature gate: 
SetEIPForNLBIngressController Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351378 31420 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351382 31420 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351385 31420 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 20 12:04:55.375706 master-0 kubenswrapper[31420]: W0220 12:04:55.351389 31420 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351393 31420 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351397 31420 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351400 31420 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351410 31420 feature_gate.go:330] unrecognized feature gate: Example Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351414 31420 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351418 31420 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351422 31420 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351425 31420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351429 31420 feature_gate.go:330] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351432 31420 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351436 31420 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351440 31420 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351443 31420 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351447 31420 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351452 31420 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351455 31420 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351459 31420 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351462 31420 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351466 31420 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 20 12:04:55.376206 master-0 kubenswrapper[31420]: W0220 12:04:55.351469 31420 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351475 31420 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: I0220 12:04:55.351481 31420 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351633 31420 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351641 31420 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351646 31420 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
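The `feature_gate.go:386] feature gates: {map[...]}` lines above summarize the gate values that actually took effect, as space-separated `Name:bool` pairs inside a `{map[...]}` wrapper. A minimal sketch of turning that summary into a dictionary (the line below is abbreviated; the full log line uses the same `key:value` layout):

```python
import re

# Abbreviated copy of the effective-gate summary logged at feature_gate.go:386.
line = ("I0220 12:04:55.346723 31420 feature_gate.go:386] feature gates: "
        "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false "
        "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

# Pull out the map body, then split the space-separated key:value pairs.
body = re.search(r"feature gates: \{map\[(.*)\]\}", line).group(1)
gates = {k: v == "true" for k, v in (pair.split(":") for pair in body.split())}
print(gates)
```

Note that this summary only lists explicitly-set gates (e.g. the deprecated `KMSv1=true` and GA `ValidatingAdmissionPolicy=true` flagged in the warnings above); gates left at their defaults do not appear in the map.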
Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351652 31420 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351656 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351660 31420 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351663 31420 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351669 31420 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351673 31420 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351677 31420 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351680 31420 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351684 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 20 12:04:55.376811 master-0 kubenswrapper[31420]: W0220 12:04:55.351688 31420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351693 31420 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351696 31420 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351700 31420 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 20 12:04:55.377189 master-0 
kubenswrapper[31420]: W0220 12:04:55.351704 31420 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351707 31420 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351711 31420 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351715 31420 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351718 31420 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351722 31420 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351726 31420 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351729 31420 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351733 31420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351738 31420 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351743 31420 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351747 31420 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351751 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351755 31420 feature_gate.go:330] unrecognized feature gate: Example Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351758 31420 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351762 31420 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 20 12:04:55.377189 master-0 kubenswrapper[31420]: W0220 12:04:55.351766 31420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351770 31420 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351775 31420 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351779 31420 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351783 31420 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351786 31420 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351789 31420 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 12:04:55.377766 master-0 
kubenswrapper[31420]: W0220 12:04:55.351795 31420 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351799 31420 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351802 31420 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351806 31420 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351810 31420 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351813 31420 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351818 31420 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351823 31420 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351827 31420 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351831 31420 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351835 31420 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351838 31420 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351842 31420 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 20 12:04:55.377766 master-0 kubenswrapper[31420]: W0220 12:04:55.351845 31420 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351849 31420 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351853 31420 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351856 31420 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351860 31420 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351864 31420 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351868 31420 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351871 31420 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 20 12:04:55.378591 
master-0 kubenswrapper[31420]: W0220 12:04:55.351875 31420 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351879 31420 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351883 31420 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351888 31420 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351892 31420 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351895 31420 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351899 31420 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351903 31420 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351906 31420 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351910 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351917 31420 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 20 12:04:55.378591 master-0 kubenswrapper[31420]: W0220 12:04:55.351922 31420 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.351928 31420 
feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.352151 31420 server.go:940] "Client rotation is on, will bootstrap in background" Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.353628 31420 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.353707 31420 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.353922 31420 server.go:997] "Starting client certificate rotation" Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.353938 31420 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.354112 31420 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-21 11:39:43 +0000 UTC, rotation deadline is 2026-02-21 08:18:57.160429603 +0000 UTC Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.354178 31420 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h14m1.806253666s for next certificate rotation Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.354801 31420 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.360548 31420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.364703 31420 log.go:25] "Validated CRI v1 runtime API" Feb 20 12:04:55.379097 master-0 kubenswrapper[31420]: I0220 12:04:55.370809 31420 log.go:25] "Validated CRI v1 image API" Feb 20 12:04:55.379423 master-0 kubenswrapper[31420]: I0220 12:04:55.375191 31420 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 20 12:04:55.387063 master-0 kubenswrapper[31420]: I0220 12:04:55.386991 31420 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 e4a1b3a0-c6e7-4552-b1bb-6cc9ae049a6f:/dev/vda3] Feb 20 12:04:55.388232 master-0 kubenswrapper[31420]: I0220 12:04:55.387133 31420 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0318746ff4f748b910f4c4078a258eb92f24f864ae719352a32329d892129cdb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0318746ff4f748b910f4c4078a258eb92f24f864ae719352a32329d892129cdb/userdata/shm major:0 minor:1292 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0d45f4e60b11e0b0a317456c0195f07cdb88a32c6fdc95b3ec005464743a5f86/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0d45f4e60b11e0b0a317456c0195f07cdb88a32c6fdc95b3ec005464743a5f86/userdata/shm major:0 minor:426 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0ede86c860ac980d49efbb5f04d472fabe03c4653074a1a827ff49d2034894a1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0ede86c860ac980d49efbb5f04d472fabe03c4653074a1a827ff49d2034894a1/userdata/shm major:0 minor:716 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1090f75c486a323ae59e4678633d1d8f31d3b0da933bc500d26a674a58096eb0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1090f75c486a323ae59e4678633d1d8f31d3b0da933bc500d26a674a58096eb0/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/166c259337ecc4f073ab3f6650460578e7d7cb947fe167df547591f1f002809b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/166c259337ecc4f073ab3f6650460578e7d7cb947fe167df547591f1f002809b/userdata/shm major:0 minor:1267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1691f192a8834aa22572ce2ad682bc87e326607190761ea473a2ecaf32c9e175/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1691f192a8834aa22572ce2ad682bc87e326607190761ea473a2ecaf32c9e175/userdata/shm major:0 minor:861 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/17031191ab6d96a7b42b27f8e62cc7de662a0a1661bf978c7cf3315a18929da9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/17031191ab6d96a7b42b27f8e62cc7de662a0a1661bf978c7cf3315a18929da9/userdata/shm major:0 minor:1128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a01adc1f41522dbb8a1d23da740cfd44f6a53e272a46c5d7003ab771e7ccdcb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a01adc1f41522dbb8a1d23da740cfd44f6a53e272a46c5d7003ab771e7ccdcb/userdata/shm major:0 minor:60 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2be4e82eb96940a91f7ac36e8a59bd96b86a7b6fac8a7814b9cb48d762103f37/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2be4e82eb96940a91f7ac36e8a59bd96b86a7b6fac8a7814b9cb48d762103f37/userdata/shm major:0 minor:431 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/327d8b93a0b8136db5fa70fbc964d1cbd5cf33fa512a27f0f0cf22df8db25f21/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/327d8b93a0b8136db5fa70fbc964d1cbd5cf33fa512a27f0f0cf22df8db25f21/userdata/shm major:0 minor:195 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3372bbf7f4c306095391a5b4c0a6615ca5aaf373fb3cc461d59deb2a7e8dca2b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3372bbf7f4c306095391a5b4c0a6615ca5aaf373fb3cc461d59deb2a7e8dca2b/userdata/shm major:0 minor:1205 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33f515505da92fce1875904be2b838a9fceeeb5773f300e97e9d391050d94811/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33f515505da92fce1875904be2b838a9fceeeb5773f300e97e9d391050d94811/userdata/shm major:0 minor:373 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3492cbd782b3ac55acb0d1ebd2aa664af10267490d59604deb78eb50aef952ff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3492cbd782b3ac55acb0d1ebd2aa664af10267490d59604deb78eb50aef952ff/userdata/shm major:0 minor:566 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/36eb1911b1d84465d4f3614b052501f0ab8200fc09c3cd58c9e93b58066e3180/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/36eb1911b1d84465d4f3614b052501f0ab8200fc09c3cd58c9e93b58066e3180/userdata/shm major:0 minor:574 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39a1a5d33692c6053b9e75c3ef75f6d5e551935ea080f8573acf4698acb62831/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39a1a5d33692c6053b9e75c3ef75f6d5e551935ea080f8573acf4698acb62831/userdata/shm major:0 minor:582 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3a76972be7f15da250f8e27177b299ce05a6278ca9f8bfe782f7866364a2323b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3a76972be7f15da250f8e27177b299ce05a6278ca9f8bfe782f7866364a2323b/userdata/shm major:0 minor:429 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/441065bea23c74396afef0b5e83785e19b00c76012695c20dcc42243f3f809f3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/441065bea23c74396afef0b5e83785e19b00c76012695c20dcc42243f3f809f3/userdata/shm major:0 minor:452 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/47b2f781a814a8d1bcfc1cccd7e4c348407c92b6cdeff2bb7b600cfbaa766dff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/47b2f781a814a8d1bcfc1cccd7e4c348407c92b6cdeff2bb7b600cfbaa766dff/userdata/shm major:0 minor:580 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4925985880a2064a6380cae65dbb1eb737b503d2a9366dcfbcec286b6e942ef7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4925985880a2064a6380cae65dbb1eb737b503d2a9366dcfbcec286b6e942ef7/userdata/shm major:0 minor:430 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/507676e6f82ab903ac83daafdb4ad3f73a28bb521382cf0074ea56ae587cb87f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/507676e6f82ab903ac83daafdb4ad3f73a28bb521382cf0074ea56ae587cb87f/userdata/shm major:0 minor:759 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/529a8813da1db26a89da6c06d3a8fcc3afc05b6c872a6a5a2b9fb3ceb4df9687/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/529a8813da1db26a89da6c06d3a8fcc3afc05b6c872a6a5a2b9fb3ceb4df9687/userdata/shm major:0 minor:193 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/54f65f910e458ec6e67c421fe2cab6c8d04efb4552cacded48383019268d4056/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/54f65f910e458ec6e67c421fe2cab6c8d04efb4552cacded48383019268d4056/userdata/shm major:0 minor:762 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5b2746caab687d58b26002188b5ccba20de2a04cd6da171355541cf375046c0d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5b2746caab687d58b26002188b5ccba20de2a04cd6da171355541cf375046c0d/userdata/shm major:0 minor:542 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10/userdata/shm major:0 minor:110 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a7bfbfdcb0537291cfa1b372b6f031e0ea91896123e7787ed049d4ad28854cc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a7bfbfdcb0537291cfa1b372b6f031e0ea91896123e7787ed049d4ad28854cc/userdata/shm major:0 minor:379 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7127b21b93cf0d636eeb4e29ca5a97fd29d095d44d5d5c9994999fa758bf4565/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7127b21b93cf0d636eeb4e29ca5a97fd29d095d44d5d5c9994999fa758bf4565/userdata/shm major:0 minor:577 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/78ca76bb28058c596e989b94f315e85b6607b7b0e487f9746f2eff407fceb169/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/78ca76bb28058c596e989b94f315e85b6607b7b0e487f9746f2eff407fceb169/userdata/shm major:0 minor:300 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7b0a0741b1c4a0dbf76177da995e7cc407702a375fbd2c1f79e4ec49f22b6e5f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7b0a0741b1c4a0dbf76177da995e7cc407702a375fbd2c1f79e4ec49f22b6e5f/userdata/shm major:0 minor:544 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7bc9a872390b5d9f7e6deaa6fe763c395d3fe8f5593fe4a35eea402b1c688808/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7bc9a872390b5d9f7e6deaa6fe763c395d3fe8f5593fe4a35eea402b1c688808/userdata/shm major:0 minor:851 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/826db63109cf25d66ed31a255738b519d4a9faae58f44b83818b33fc45665543/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/826db63109cf25d66ed31a255738b519d4a9faae58f44b83818b33fc45665543/userdata/shm major:0 minor:578 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c/userdata/shm major:0 minor:286 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8341254b8ef7faec187b8fe415e34b54bbc9e2b3da20b0d37f8005ee126bc089/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8341254b8ef7faec187b8fe415e34b54bbc9e2b3da20b0d37f8005ee126bc089/userdata/shm major:0 minor:381 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84bc6873f1c2f152a188b93adf9b13caf01f769508b7055c0e1ef90ebe5496e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84bc6873f1c2f152a188b93adf9b13caf01f769508b7055c0e1ef90ebe5496e8/userdata/shm major:0 minor:844 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8cf490279cd50e81a0597e17ffd2c0830f353d5b000ce0e906995ead9d10342b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8cf490279cd50e81a0597e17ffd2c0830f353d5b000ce0e906995ead9d10342b/userdata/shm major:0 minor:540 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402/userdata/shm major:0 minor:282 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9a135ef2bf0cea92e0e6d6c962da99bd4bf9e44e47304e0bce9ab97fa97ad55c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9a135ef2bf0cea92e0e6d6c962da99bd4bf9e44e47304e0bce9ab97fa97ad55c/userdata/shm major:0 minor:194 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9cc7b181ab55ab6abb3242c925ed6067592af711ebb394b812dbd9cfe003dfbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9cc7b181ab55ab6abb3242c925ed6067592af711ebb394b812dbd9cfe003dfbd/userdata/shm major:0 minor:85 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a7e80ad99f32fd1031084b1ec720eccfe0c30d3f2999f46f1a0b9a07c12c03d3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a7e80ad99f32fd1031084b1ec720eccfe0c30d3f2999f46f1a0b9a07c12c03d3/userdata/shm major:0 minor:581 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a92cb32c4be6840fe62cceeff083a250664f650a02bcc7c9c164c3636c13a84d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a92cb32c4be6840fe62cceeff083a250664f650a02bcc7c9c164c3636c13a84d/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aaa9389e6efd83bcb84425795f77ecd0592b13d2955b3048aeff511ecb88fc48/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aaa9389e6efd83bcb84425795f77ecd0592b13d2955b3048aeff511ecb88fc48/userdata/shm major:0 minor:671 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ac8b90837a8f5e731e7b22ff050f1b380571286ef85231576020efe34cd2e430/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ac8b90837a8f5e731e7b22ff050f1b380571286ef85231576020efe34cd2e430/userdata/shm major:0 minor:568 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ad1e0968f9a0f9395b52d4138ec76c893d5513164ae2900823432b7870c6a271/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ad1e0968f9a0f9395b52d4138ec76c893d5513164ae2900823432b7870c6a271/userdata/shm major:0 minor:1135 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ad27979ee67ec73db6166a66f6c8de5d02655b589472440fd2f397e6aebb3ab2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ad27979ee67ec73db6166a66f6c8de5d02655b589472440fd2f397e6aebb3ab2/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b246614c1f2f72db4cedbcce4b955bc3ac0b04e8bff7cc76cf229101226ee259/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b246614c1f2f72db4cedbcce4b955bc3ac0b04e8bff7cc76cf229101226ee259/userdata/shm major:0 minor:575 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b2b9e99c760ef8b2b1d3b355cd2d86c95a75c8f2455bc3e22e89b188ba7101e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b2b9e99c760ef8b2b1d3b355cd2d86c95a75c8f2455bc3e22e89b188ba7101e3/userdata/shm major:0 minor:78 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b2d70b13e56c93d2b547edf220b4dd7dcd419773ebed8ee5ba82b3212eb438a5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b2d70b13e56c93d2b547edf220b4dd7dcd419773ebed8ee5ba82b3212eb438a5/userdata/shm major:0 minor:394 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b37a738de54db612fada8bd81cf2bdbaea3d8eea466401eb8ad83715ec6bea2f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b37a738de54db612fada8bd81cf2bdbaea3d8eea466401eb8ad83715ec6bea2f/userdata/shm major:0 minor:47 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63/userdata/shm major:0 minor:127 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b77cf717eaf94cf8bf6837636ba7313b88c41d8f394ba5e1308558d0bca1c808/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b77cf717eaf94cf8bf6837636ba7313b88c41d8f394ba5e1308558d0bca1c808/userdata/shm major:0 minor:1130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b89fd2b72c95ae892c409ef90ceca60361969c1db213c09131f13705c3334986/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b89fd2b72c95ae892c409ef90ceca60361969c1db213c09131f13705c3334986/userdata/shm major:0 minor:390 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bccd3e3cca0e5a27f19803d019ffa435cc0a6a211a761789d34e9900fb9748dc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bccd3e3cca0e5a27f19803d019ffa435cc0a6a211a761789d34e9900fb9748dc/userdata/shm major:0 minor:337 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c16bea21819b6e1d15437de870badf6de1fc66d12913185c9cecf80f61f24b54/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c16bea21819b6e1d15437de870badf6de1fc66d12913185c9cecf80f61f24b54/userdata/shm major:0 minor:80 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5552d51223ad679691154e2dedf71641b800849a05e120dc501f6840be1e99e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5552d51223ad679691154e2dedf71641b800849a05e120dc501f6840be1e99e/userdata/shm major:0 minor:810 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b/userdata/shm major:0 minor:304 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cbd2814207ea73c81ee03ec39936289eb40513d40ec1dfdddcdf33cff0834b18/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cbd2814207ea73c81ee03ec39936289eb40513d40ec1dfdddcdf33cff0834b18/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d0a141b311d0fcd6bd712d0075c6fb1c7f72a45707678fd94f7971d15d34a88f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d0a141b311d0fcd6bd712d0075c6fb1c7f72a45707678fd94f7971d15d34a88f/userdata/shm major:0 minor:1166 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d3506d2533f5948044615b3daf194c86dee0685849b66763860811b20d32f418/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3506d2533f5948044615b3daf194c86dee0685849b66763860811b20d32f418/userdata/shm major:0 minor:1304 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d45bdb88cf4fb87c1f9683f4dd82403ae62e23be61f87cc716489058be0075c3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d45bdb88cf4fb87c1f9683f4dd82403ae62e23be61f87cc716489058be0075c3/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d462fc60c97084643070378d982a956e1f53a8cb223bde5d6b24565dab2fc818/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d462fc60c97084643070378d982a956e1f53a8cb223bde5d6b24565dab2fc818/userdata/shm major:0 minor:1129 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d72373aa995597c762385fce3b659d1483668a485ad494b6b7d7dd517099e857/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d72373aa995597c762385fce3b659d1483668a485ad494b6b7d7dd517099e857/userdata/shm major:0 minor:380 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/db83379789ef98a1c8bd3954093bb31968ab0139d9f5bc569d532d29a9e92213/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db83379789ef98a1c8bd3954093bb31968ab0139d9f5bc569d532d29a9e92213/userdata/shm major:0 minor:90 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd42f3b0e8e73a155f4ae8d3e76cb9c1f46437280ce91aa23b51a6b995b48869/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd42f3b0e8e73a155f4ae8d3e76cb9c1f46437280ce91aa23b51a6b995b48869/userdata/shm major:0 minor:425 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e112dc6a9d5f726f666b1385197c77d837257cbee8251d26060f19151f5ada2f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e112dc6a9d5f726f666b1385197c77d837257cbee8251d26060f19151f5ada2f/userdata/shm major:0 minor:858 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e668bf18622f735aba88fe56630f792fd4bf653bbe4e51d87240b3f22f8d64bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e668bf18622f735aba88fe56630f792fd4bf653bbe4e51d87240b3f22f8d64bd/userdata/shm major:0 minor:714 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e6e96cc43446a3135181efd422d5641ca4e6cd2f71bf5238bf91b4954d41a24a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e6e96cc43446a3135181efd422d5641ca4e6cd2f71bf5238bf91b4954d41a24a/userdata/shm major:0 minor:779 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e89e4070dae8204d097a2414e77e4c5c562c772569afe17fb4b2e8b090f82fda/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e89e4070dae8204d097a2414e77e4c5c562c772569afe17fb4b2e8b090f82fda/userdata/shm major:0 minor:784 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e9b84bcaac977feb96e17841e41bc90c7743f1709d32f1ab4ffe9c651b7c5436/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e9b84bcaac977feb96e17841e41bc90c7743f1709d32f1ab4ffe9c651b7c5436/userdata/shm major:0 minor:1227 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fb0b310e4353078b29e20eeb338d9c0abab57242511e7e72ade79783d9a85447/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb0b310e4353078b29e20eeb338d9c0abab57242511e7e72ade79783d9a85447/userdata/shm major:0 minor:1207 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~projected/kube-api-access-j2tk7:{mountpoint:/var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~projected/kube-api-access-j2tk7 major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~projected/kube-api-access-8k2dv:{mountpoint:/var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~projected/kube-api-access-8k2dv major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/042d8457-04dc-4171-8b0f-f9e3de695c46/volumes/kubernetes.io~projected/kube-api-access-hpz9d:{mountpoint:/var/lib/kubelet/pods/042d8457-04dc-4171-8b0f-f9e3de695c46/volumes/kubernetes.io~projected/kube-api-access-hpz9d major:0 minor:1203 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/042d8457-04dc-4171-8b0f-f9e3de695c46/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/042d8457-04dc-4171-8b0f-f9e3de695c46/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config 
major:0 minor:1200 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/042d8457-04dc-4171-8b0f-f9e3de695c46/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/042d8457-04dc-4171-8b0f-f9e3de695c46/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1201 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07281644-2789-424f-8429-aa4448dda01e/volumes/kubernetes.io~projected/kube-api-access-l5pw4:{mountpoint:/var/lib/kubelet/pods/07281644-2789-424f-8429-aa4448dda01e/volumes/kubernetes.io~projected/kube-api-access-l5pw4 major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/11aaad8c-2f25-460f-b4af-f27d8bc682a0/volumes/kubernetes.io~projected/kube-api-access-x5z86:{mountpoint:/var/lib/kubelet/pods/11aaad8c-2f25-460f-b4af-f27d8bc682a0/volumes/kubernetes.io~projected/kube-api-access-x5z86 major:0 minor:860 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1709ef31-9ddd-42bf-9a95-4be4502a0828/volumes/kubernetes.io~projected/kube-api-access-79j9f:{mountpoint:/var/lib/kubelet/pods/1709ef31-9ddd-42bf-9a95-4be4502a0828/volumes/kubernetes.io~projected/kube-api-access-79j9f major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1709ef31-9ddd-42bf-9a95-4be4502a0828/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/1709ef31-9ddd-42bf-9a95-4be4502a0828/volumes/kubernetes.io~secret/metrics-certs major:0 minor:560 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19cf75ed-6a4e-444d-8975-fa6ecba79f13/volumes/kubernetes.io~projected/kube-api-access-7hxz5:{mountpoint:/var/lib/kubelet/pods/19cf75ed-6a4e-444d-8975-fa6ecba79f13/volumes/kubernetes.io~projected/kube-api-access-7hxz5 major:0 minor:843 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~projected/kube-api-access-lvjcp:{mountpoint:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~projected/kube-api-access-lvjcp major:0 minor:252 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/etcd-client major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~projected/kube-api-access-7mggv:{mountpoint:/var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~projected/kube-api-access-7mggv major:0 minor:270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fb59696-1d5f-41bb-9211-b89c63b10840/volumes/kubernetes.io~projected/kube-api-access-8djgj:{mountpoint:/var/lib/kubelet/pods/1fb59696-1d5f-41bb-9211-b89c63b10840/volumes/kubernetes.io~projected/kube-api-access-8djgj major:0 minor:356 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~projected/kube-api-access-8p4w6:{mountpoint:/var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~projected/kube-api-access-8p4w6 major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~secret/serving-cert major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/21e8e44b-b883-4afb-af90-d6c1265edf34/volumes/kubernetes.io~projected/kube-api-access-rk6hv:{mountpoint:/var/lib/kubelet/pods/21e8e44b-b883-4afb-af90-d6c1265edf34/volumes/kubernetes.io~projected/kube-api-access-rk6hv major:0 minor:699 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21e8e44b-b883-4afb-af90-d6c1265edf34/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/21e8e44b-b883-4afb-af90-d6c1265edf34/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:666 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22bba1b3-587d-4802-b4ae-946827c3fa7a/volumes/kubernetes.io~projected/kube-api-access-2wnh5:{mountpoint:/var/lib/kubelet/pods/22bba1b3-587d-4802-b4ae-946827c3fa7a/volumes/kubernetes.io~projected/kube-api-access-2wnh5 major:0 minor:291 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22bba1b3-587d-4802-b4ae-946827c3fa7a/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/22bba1b3-587d-4802-b4ae-946827c3fa7a/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:562 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe/volumes/kubernetes.io~projected/kube-api-access-rxr6j:{mountpoint:/var/lib/kubelet/pods/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe/volumes/kubernetes.io~projected/kube-api-access-rxr6j major:0 minor:385 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe/volumes/kubernetes.io~secret/proxy-tls major:0 minor:378 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f9cd117-c84f-44c9-80a9-879a04d62934/volumes/kubernetes.io~projected/kube-api-access-m98rt:{mountpoint:/var/lib/kubelet/pods/2f9cd117-c84f-44c9-80a9-879a04d62934/volumes/kubernetes.io~projected/kube-api-access-m98rt major:0 minor:1165 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2f9cd117-c84f-44c9-80a9-879a04d62934/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/2f9cd117-c84f-44c9-80a9-879a04d62934/volumes/kubernetes.io~secret/certs major:0 minor:1160 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f9cd117-c84f-44c9-80a9-879a04d62934/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/2f9cd117-c84f-44c9-80a9-879a04d62934/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~projected/kube-api-access-bpnmz:{mountpoint:/var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~projected/kube-api-access-bpnmz major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~projected/kube-api-access-7ts6s:{mountpoint:/var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~projected/kube-api-access-7ts6s major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34382460-b2d7-4154-87ba-c0347a4c0f1b/volumes/kubernetes.io~projected/kube-api-access-5dx9s:{mountpoint:/var/lib/kubelet/pods/34382460-b2d7-4154-87ba-c0347a4c0f1b/volumes/kubernetes.io~projected/kube-api-access-5dx9s major:0 minor:850 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51/volumes/kubernetes.io~projected/kube-api-access-qkn7h:{mountpoint:/var/lib/kubelet/pods/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51/volumes/kubernetes.io~projected/kube-api-access-qkn7h major:0 minor:902 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51/volumes/kubernetes.io~secret/proxy-tls major:0 minor:901 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39790258-73bc-4c37-a935-e8d3c2a2d5c6/volumes/kubernetes.io~projected/kube-api-access-94lkp:{mountpoint:/var/lib/kubelet/pods/39790258-73bc-4c37-a935-e8d3c2a2d5c6/volumes/kubernetes.io~projected/kube-api-access-94lkp major:0 minor:977 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39790258-73bc-4c37-a935-e8d3c2a2d5c6/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/39790258-73bc-4c37-a935-e8d3c2a2d5c6/volumes/kubernetes.io~secret/cert major:0 minor:783 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39ccf158-b40f-4dba-90e2-27b1409487b7/volumes/kubernetes.io~projected/kube-api-access-4zmwm:{mountpoint:/var/lib/kubelet/pods/39ccf158-b40f-4dba-90e2-27b1409487b7/volumes/kubernetes.io~projected/kube-api-access-4zmwm major:0 minor:332 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~projected/kube-api-access-5j4cs:{mountpoint:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~projected/kube-api-access-5j4cs major:0 minor:141 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~projected/kube-api-access-8nd7r:{mountpoint:/var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~projected/kube-api-access-8nd7r major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:422 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:421 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~projected/kube-api-access-26x7b:{mountpoint:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~projected/kube-api-access-26x7b major:0 minor:292 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~secret/srv-cert major:0 minor:561 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4d8cd7c5-31fd-4dca-b39b-6d62eb573707/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/4d8cd7c5-31fd-4dca-b39b-6d62eb573707/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1120 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/533fe3c7-504f-40aa-aab0-8d66ef27920f/volumes/kubernetes.io~projected/kube-api-access-jrwcs:{mountpoint:/var/lib/kubelet/pods/533fe3c7-504f-40aa-aab0-8d66ef27920f/volumes/kubernetes.io~projected/kube-api-access-jrwcs major:0 minor:108 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~projected/kube-api-access-7vvm8:{mountpoint:/var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~projected/kube-api-access-7vvm8 major:0 minor:295 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~secret/serving-cert major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~projected/kube-api-access-mp57v:{mountpoint:/var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~projected/kube-api-access-mp57v major:0 minor:558 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~secret/encryption-config major:0 minor:556 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~secret/etcd-client major:0 minor:555 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~secret/serving-cert major:0 minor:551 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c104245-d078-4856-9a60-207bb6efcfe8/volumes/kubernetes.io~projected/kube-api-access-nlcjf:{mountpoint:/var/lib/kubelet/pods/5c104245-d078-4856-9a60-207bb6efcfe8/volumes/kubernetes.io~projected/kube-api-access-nlcjf major:0 minor:884 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c104245-d078-4856-9a60-207bb6efcfe8/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/5c104245-d078-4856-9a60-207bb6efcfe8/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:883 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/volumes/kubernetes.io~projected/kube-api-access-2dx69:{mountpoint:/var/lib/kubelet/pods/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/volumes/kubernetes.io~projected/kube-api-access-2dx69 major:0 minor:1194 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62fc400b-b3dd-4134-bd27-69dd8369153a/volumes/kubernetes.io~projected/kube-api-access-zbsxw:{mountpoint:/var/lib/kubelet/pods/62fc400b-b3dd-4134-bd27-69dd8369153a/volumes/kubernetes.io~projected/kube-api-access-zbsxw major:0 minor:900 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/62fc400b-b3dd-4134-bd27-69dd8369153a/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/62fc400b-b3dd-4134-bd27-69dd8369153a/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:899 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6479d88f-463f-48ed-846d-2747752a8abb/volumes/kubernetes.io~projected/kube-api-access-mfmdd:{mountpoint:/var/lib/kubelet/pods/6479d88f-463f-48ed-846d-2747752a8abb/volumes/kubernetes.io~projected/kube-api-access-mfmdd major:0 minor:1291 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6479d88f-463f-48ed-846d-2747752a8abb/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/6479d88f-463f-48ed-846d-2747752a8abb/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1287 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~projected/kube-api-access-kfzqt:{mountpoint:/var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~projected/kube-api-access-kfzqt major:0 minor:1266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1260 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~projected/kube-api-access-2zkbq:{mountpoint:/var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~projected/kube-api-access-2zkbq major:0 minor:288 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~secret/serving-cert major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6dfca740-0387-428a-b957-3e8a09c6e352/volumes/kubernetes.io~projected/kube-api-access-d4457:{mountpoint:/var/lib/kubelet/pods/6dfca740-0387-428a-b957-3e8a09c6e352/volumes/kubernetes.io~projected/kube-api-access-d4457 major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6dfca740-0387-428a-b957-3e8a09c6e352/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/6dfca740-0387-428a-b957-3e8a09c6e352/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:559 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7635c0ff-4d40-4310-8187-230323e504e0/volumes/kubernetes.io~projected/kube-api-access-p5m78:{mountpoint:/var/lib/kubelet/pods/7635c0ff-4d40-4310-8187-230323e504e0/volumes/kubernetes.io~projected/kube-api-access-p5m78 major:0 minor:484 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7635c0ff-4d40-4310-8187-230323e504e0/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/7635c0ff-4d40-4310-8187-230323e504e0/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:273 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/kube-api-access-lqxhp:{mountpoint:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/kube-api-access-lqxhp major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:415 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~projected/kube-api-access-ms8wk:{mountpoint:/var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~projected/kube-api-access-ms8wk major:0 minor:164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~secret/webhook-cert major:0 minor:163 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/839bf5b1-b242-4bbd-bc09-cf6abcf7f734/volumes/kubernetes.io~projected/kube-api-access-pvxsh:{mountpoint:/var/lib/kubelet/pods/839bf5b1-b242-4bbd-bc09-cf6abcf7f734/volumes/kubernetes.io~projected/kube-api-access-pvxsh major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89383482-190e-4f74-a81e-b1547e5b9ae6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/89383482-190e-4f74-a81e-b1547e5b9ae6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:697 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89383482-190e-4f74-a81e-b1547e5b9ae6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/89383482-190e-4f74-a81e-b1547e5b9ae6/volumes/kubernetes.io~secret/serving-cert major:0 minor:692 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/89ed6373-78f8-4d77-82b2-1ab055b5b862/volumes/kubernetes.io~projected/kube-api-access-f64ql:{mountpoint:/var/lib/kubelet/pods/89ed6373-78f8-4d77-82b2-1ab055b5b862/volumes/kubernetes.io~projected/kube-api-access-f64ql major:0 minor:1204 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89ed6373-78f8-4d77-82b2-1ab055b5b862/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/89ed6373-78f8-4d77-82b2-1ab055b5b862/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1199 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89ed6373-78f8-4d77-82b2-1ab055b5b862/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/89ed6373-78f8-4d77-82b2-1ab055b5b862/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1202 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a97bbf5-7409-4f36-894b-b88284e1b6d0/volumes/kubernetes.io~projected/kube-api-access-vq4ct:{mountpoint:/var/lib/kubelet/pods/8a97bbf5-7409-4f36-894b-b88284e1b6d0/volumes/kubernetes.io~projected/kube-api-access-vq4ct major:0 minor:391 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a97bbf5-7409-4f36-894b-b88284e1b6d0/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/8a97bbf5-7409-4f36-894b-b88284e1b6d0/volumes/kubernetes.io~secret/signing-key major:0 minor:386 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ab951b1-6898-4357-b813-16365f3f89d5/volumes/kubernetes.io~projected/kube-api-access-xdzzt:{mountpoint:/var/lib/kubelet/pods/8ab951b1-6898-4357-b813-16365f3f89d5/volumes/kubernetes.io~projected/kube-api-access-xdzzt major:0 minor:889 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ab951b1-6898-4357-b813-16365f3f89d5/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/8ab951b1-6898-4357-b813-16365f3f89d5/volumes/kubernetes.io~secret/cert major:0 minor:888 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8df029f2-d0ec-4543-9371-7694b1e85a06/volumes/kubernetes.io~projected/kube-api-access-kwgg6:{mountpoint:/var/lib/kubelet/pods/8df029f2-d0ec-4543-9371-7694b1e85a06/volumes/kubernetes.io~projected/kube-api-access-kwgg6 major:0 minor:857 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/906307ef-d988-49e7-9d63-39116a2c4880/volumes/kubernetes.io~projected/kube-api-access-5j82z:{mountpoint:/var/lib/kubelet/pods/906307ef-d988-49e7-9d63-39116a2c4880/volumes/kubernetes.io~projected/kube-api-access-5j82z major:0 minor:280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98226a59-5234-48f3-a9cd-21de305810dc/volumes/kubernetes.io~projected/kube-api-access-j2hwr:{mountpoint:/var/lib/kubelet/pods/98226a59-5234-48f3-a9cd-21de305810dc/volumes/kubernetes.io~projected/kube-api-access-j2hwr major:0 minor:698 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98226a59-5234-48f3-a9cd-21de305810dc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/98226a59-5234-48f3-a9cd-21de305810dc/volumes/kubernetes.io~secret/serving-cert major:0 minor:656 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~projected/kube-api-access-x2qdb:{mountpoint:/var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~projected/kube-api-access-x2qdb major:0 minor:1124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~secret/default-certificate major:0 minor:1121 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1116 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~secret/stats-auth major:0 minor:1123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~projected/kube-api-access-qqzpj:{mountpoint:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~projected/kube-api-access-qqzpj major:0 minor:1290 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:1281 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:1289 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:1288 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:1286 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae1fd116-6f63-4344-b7af-278665649e5a/volumes/kubernetes.io~projected/kube-api-access-wf682:{mountpoint:/var/lib/kubelet/pods/ae1fd116-6f63-4344-b7af-278665649e5a/volumes/kubernetes.io~projected/kube-api-access-wf682 major:0 minor:898 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ae1fd116-6f63-4344-b7af-278665649e5a/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/ae1fd116-6f63-4344-b7af-278665649e5a/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:896 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae1fd116-6f63-4344-b7af-278665649e5a/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/ae1fd116-6f63-4344-b7af-278665649e5a/volumes/kubernetes.io~secret/webhook-cert major:0 minor:897 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af18215b-e749-4565-bb6c-24e92c452817/volumes/kubernetes.io~projected/kube-api-access-7c9xz:{mountpoint:/var/lib/kubelet/pods/af18215b-e749-4565-bb6c-24e92c452817/volumes/kubernetes.io~projected/kube-api-access-7c9xz major:0 minor:539 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af18215b-e749-4565-bb6c-24e92c452817/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/af18215b-e749-4565-bb6c-24e92c452817/volumes/kubernetes.io~secret/metrics-tls major:0 minor:537 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/afa174b3-912c-4b56-b5eb-f3e3df012c11/volumes/kubernetes.io~projected/kube-api-access-2795m:{mountpoint:/var/lib/kubelet/pods/afa174b3-912c-4b56-b5eb-f3e3df012c11/volumes/kubernetes.io~projected/kube-api-access-2795m major:0 minor:545 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/volumes/kubernetes.io~projected/ca-certs major:0 minor:657 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/volumes/kubernetes.io~projected/kube-api-access-sc9wx:{mountpoint:/var/lib/kubelet/pods/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/volumes/kubernetes.io~projected/kube-api-access-sc9wx major:0 minor:675 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/volumes/kubernetes.io~projected/kube-api-access-rcnmk:{mountpoint:/var/lib/kubelet/pods/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/volumes/kubernetes.io~projected/kube-api-access-rcnmk major:0 minor:290 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/volumes/kubernetes.io~secret/metrics-tls major:0 minor:424 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9eb45bd-fc01-4707-87ea-64f07f72f6f9/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/b9eb45bd-fc01-4707-87ea-64f07f72f6f9/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:518 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9eb45bd-fc01-4707-87ea-64f07f72f6f9/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/b9eb45bd-fc01-4707-87ea-64f07f72f6f9/volumes/kubernetes.io~empty-dir/tmp major:0 minor:517 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9eb45bd-fc01-4707-87ea-64f07f72f6f9/volumes/kubernetes.io~projected/kube-api-access-qxm8p:{mountpoint:/var/lib/kubelet/pods/b9eb45bd-fc01-4707-87ea-64f07f72f6f9/volumes/kubernetes.io~projected/kube-api-access-qxm8p major:0 minor:520 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9fe0660-fae4-4f97-8895-dbc4845cee40/volumes/kubernetes.io~projected/kube-api-access-7r85p:{mountpoint:/var/lib/kubelet/pods/b9fe0660-fae4-4f97-8895-dbc4845cee40/volumes/kubernetes.io~projected/kube-api-access-7r85p major:0 minor:1174 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9fe0660-fae4-4f97-8895-dbc4845cee40/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b9fe0660-fae4-4f97-8895-dbc4845cee40/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1171 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b9fe0660-fae4-4f97-8895-dbc4845cee40/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/b9fe0660-fae4-4f97-8895-dbc4845cee40/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1170 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bbdbadd9-eeaa-46ef-936e-5db8d395c118/volumes/kubernetes.io~projected/kube-api-access-ttmwx:{mountpoint:/var/lib/kubelet/pods/bbdbadd9-eeaa-46ef-936e-5db8d395c118/volumes/kubernetes.io~projected/kube-api-access-ttmwx major:0 minor:892 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bbdbadd9-eeaa-46ef-936e-5db8d395c118/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/bbdbadd9-eeaa-46ef-936e-5db8d395c118/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:891 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd609bd3-2525-4b88-8f07-94a0418fb582/volumes/kubernetes.io~projected/kube-api-access-zztmz:{mountpoint:/var/lib/kubelet/pods/bd609bd3-2525-4b88-8f07-94a0418fb582/volumes/kubernetes.io~projected/kube-api-access-zztmz major:0 minor:887 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd609bd3-2525-4b88-8f07-94a0418fb582/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/bd609bd3-2525-4b88-8f07-94a0418fb582/volumes/kubernetes.io~secret/cert major:0 minor:885 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd609bd3-2525-4b88-8f07-94a0418fb582/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/bd609bd3-2525-4b88-8f07-94a0418fb582/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:886 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/volumes/kubernetes.io~projected/kube-api-access-s4j88:{mountpoint:/var/lib/kubelet/pods/bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/volumes/kubernetes.io~projected/kube-api-access-s4j88 major:0 minor:538 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c29fd426-7c89-434e-8332-1ca31075d4bf/volumes/kubernetes.io~projected/kube-api-access-z7k2n:{mountpoint:/var/lib/kubelet/pods/c29fd426-7c89-434e-8332-1ca31075d4bf/volumes/kubernetes.io~projected/kube-api-access-z7k2n major:0 minor:877 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c29fd426-7c89-434e-8332-1ca31075d4bf/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c29fd426-7c89-434e-8332-1ca31075d4bf/volumes/kubernetes.io~secret/serving-cert major:0 minor:805 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~projected/kube-api-access-2k8n8:{mountpoint:/var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~projected/kube-api-access-2k8n8 major:0 minor:276 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~projected/kube-api-access-bpk24:{mountpoint:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~projected/kube-api-access-bpk24 major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~secret/srv-cert major:0 minor:563 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d9f9442b-25b9-420f-b748-bb13423809fe/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/d9f9442b-25b9-420f-b748-bb13423809fe/volumes/kubernetes.io~projected/ca-certs major:0 minor:479 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9f9442b-25b9-420f-b748-bb13423809fe/volumes/kubernetes.io~projected/kube-api-access-kxs4n:{mountpoint:/var/lib/kubelet/pods/d9f9442b-25b9-420f-b748-bb13423809fe/volumes/kubernetes.io~projected/kube-api-access-kxs4n major:0 minor:481 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9f9442b-25b9-420f-b748-bb13423809fe/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/d9f9442b-25b9-420f-b748-bb13423809fe/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:480 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/daf25ef5-8247-4dbb-bdc1-55104b1015b7/volumes/kubernetes.io~projected/kube-api-access-78bqv:{mountpoint:/var/lib/kubelet/pods/daf25ef5-8247-4dbb-bdc1-55104b1015b7/volumes/kubernetes.io~projected/kube-api-access-78bqv major:0 minor:893 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/daf25ef5-8247-4dbb-bdc1-55104b1015b7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/daf25ef5-8247-4dbb-bdc1-55104b1015b7/volumes/kubernetes.io~secret/serving-cert major:0 minor:890 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/kube-api-access-6td56:{mountpoint:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/kube-api-access-6td56 major:0 minor:253 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~secret/metrics-tls major:0 minor:420 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dbce6cdc-040a-48e1-8a81-b6ff9c180eba/volumes/kubernetes.io~projected/kube-api-access-z2kct:{mountpoint:/var/lib/kubelet/pods/dbce6cdc-040a-48e1-8a81-b6ff9c180eba/volumes/kubernetes.io~projected/kube-api-access-z2kct major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dbce6cdc-040a-48e1-8a81-b6ff9c180eba/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/dbce6cdc-040a-48e1-8a81-b6ff9c180eba/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:557 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~projected/kube-api-access major:0 minor:289 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~secret/serving-cert major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8c48a22-ed96-42c5-ac4a-dd7d4f204539/volumes/kubernetes.io~projected/kube-api-access-ksx6l:{mountpoint:/var/lib/kubelet/pods/e8c48a22-ed96-42c5-ac4a-dd7d4f204539/volumes/kubernetes.io~projected/kube-api-access-ksx6l major:0 minor:377 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8c48a22-ed96-42c5-ac4a-dd7d4f204539/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/e8c48a22-ed96-42c5-ac4a-dd7d4f204539/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:376 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/eb135cff-1a2e-468d-80ab-f7db3f57552a/volumes/kubernetes.io~projected/kube-api-access-tk5sc:{mountpoint:/var/lib/kubelet/pods/eb135cff-1a2e-468d-80ab-f7db3f57552a/volumes/kubernetes.io~projected/kube-api-access-tk5sc major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb135cff-1a2e-468d-80ab-f7db3f57552a/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/eb135cff-1a2e-468d-80ab-f7db3f57552a/volumes/kubernetes.io~secret/proxy-tls major:0 minor:565 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef18ace4-7316-4600-9be9-2adc792705e9/volumes/kubernetes.io~projected/kube-api-access-kn7cs:{mountpoint:/var/lib/kubelet/pods/ef18ace4-7316-4600-9be9-2adc792705e9/volumes/kubernetes.io~projected/kube-api-access-kn7cs major:0 minor:882 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef18ace4-7316-4600-9be9-2adc792705e9/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/ef18ace4-7316-4600-9be9-2adc792705e9/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:881 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~projected/kube-api-access major:0 minor:263 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~secret/serving-cert major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~projected/kube-api-access-7krn8:{mountpoint:/var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~projected/kube-api-access-7krn8 major:0 minor:727 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~secret/encryption-config major:0 minor:755 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~secret/etcd-client major:0 minor:753 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~secret/serving-cert major:0 minor:754 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fca78741-ca32-4867-b44f-483fd62f2942/volumes/kubernetes.io~projected/kube-api-access-2cnvt:{mountpoint:/var/lib/kubelet/pods/fca78741-ca32-4867-b44f-483fd62f2942/volumes/kubernetes.io~projected/kube-api-access-2cnvt major:0 minor:1126 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/22ea9c924a9102bb524fd29a7926bfc2aecca9dc1dc0702489b6d9f56ed98e55/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-1023:{mountpoint:/var/lib/containers/storage/overlay/1bf594c692d85c641b6942cc20b94f457a188551bdf94dd796ad9df475887966/merged major:0 minor:1023 fsType:overlay blockSize:0} 
overlay_0-1028:{mountpoint:/var/lib/containers/storage/overlay/2caace05bb4ae266755af5517bf80eb6644572a6cb0baf030a7307c666dbf92e/merged major:0 minor:1028 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/b3e8167dd2e905cfa15d0a342d8833e5e888056ad7d36147e746b92b784a3862/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1030:{mountpoint:/var/lib/containers/storage/overlay/98b09a25a595c19ecf748a3daaed952069b798ce6b7600cfe78e78afa15c8ad5/merged major:0 minor:1030 fsType:overlay blockSize:0} overlay_0-1032:{mountpoint:/var/lib/containers/storage/overlay/45eb804a7952314b6d5c2e201a5cd4d8dcdd7f083fee2b43bc74cfc260d4f86f/merged major:0 minor:1032 fsType:overlay blockSize:0} overlay_0-1038:{mountpoint:/var/lib/containers/storage/overlay/f0444685cd801780ff614703b2abaebc4d97183a773747932e256937ed350af3/merged major:0 minor:1038 fsType:overlay blockSize:0} overlay_0-1040:{mountpoint:/var/lib/containers/storage/overlay/76de2cd7b756baa7373fab875d11fe085200dea00d697989a9e029bfdf4ef779/merged major:0 minor:1040 fsType:overlay blockSize:0} overlay_0-1044:{mountpoint:/var/lib/containers/storage/overlay/90fc718dc8330b5776875cfbe89989541a631128857e2c08f318f8eab3b00622/merged major:0 minor:1044 fsType:overlay blockSize:0} overlay_0-105:{mountpoint:/var/lib/containers/storage/overlay/81cba281f97e9eec9eb72641d63eb3998dcd493fde97886cadca16409aece607/merged major:0 minor:105 fsType:overlay blockSize:0} overlay_0-1050:{mountpoint:/var/lib/containers/storage/overlay/5537d4856519d31ad274977c9531c6b64bca1a0d0997d9e77d1caac22d8c990d/merged major:0 minor:1050 fsType:overlay blockSize:0} overlay_0-1054:{mountpoint:/var/lib/containers/storage/overlay/61ddd8a8095751a084ca7a452242dedadcf9fe11c6a4442cd66610703c91e12e/merged major:0 minor:1054 fsType:overlay blockSize:0} overlay_0-1074:{mountpoint:/var/lib/containers/storage/overlay/f3b8251ff986fdbff963d327144c3f4782953ba23e367f9eddf4927a1302066a/merged major:0 minor:1074 fsType:overlay blockSize:0} 
overlay_0-1077:{mountpoint:/var/lib/containers/storage/overlay/babfb6de4f9b5a690033e249fe492d51063e5b7583325c6f0e876fdfd5d5fd6b/merged major:0 minor:1077 fsType:overlay blockSize:0} overlay_0-1089:{mountpoint:/var/lib/containers/storage/overlay/96acb4a1c37bb20923a3371da28bf3307e9ff3b04b5ad1623298f780dbe5f49d/merged major:0 minor:1089 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/d96cbc86797031a5df209db859cd4336274e474186205c4023701ca5d3940012/merged major:0 minor:1097 fsType:overlay blockSize:0} overlay_0-1108:{mountpoint:/var/lib/containers/storage/overlay/88a9208698045924e1199bf010edfa8612f60f61212c39c9364755df20967e02/merged major:0 minor:1108 fsType:overlay blockSize:0} overlay_0-1112:{mountpoint:/var/lib/containers/storage/overlay/ce24e7045035b51c646d07eeea834bb0ea304260254fe65a85bbe17202e2dc89/merged major:0 minor:1112 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/a0709a6e5ed1770dae849b00a9e10d6fbe9286956d53800d219fad5d23a3fea8/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-1122:{mountpoint:/var/lib/containers/storage/overlay/7257b97632ca75a56c0476508211b289b75311003e7a0e7a9348a65b31bd32c3/merged major:0 minor:1122 fsType:overlay blockSize:0} overlay_0-1133:{mountpoint:/var/lib/containers/storage/overlay/e484725008b06e0e16d0e77c533b57930d6b42c6dc47bbc8971d9f774d1532cd/merged major:0 minor:1133 fsType:overlay blockSize:0} overlay_0-1139:{mountpoint:/var/lib/containers/storage/overlay/f2604773dbb71321375def7700dc618d9965b5e59649dfc778c77bc764b3b5b3/merged major:0 minor:1139 fsType:overlay blockSize:0} overlay_0-1141:{mountpoint:/var/lib/containers/storage/overlay/a7e7864b41e55a0b699510dcd827051a931e92e49e133661237f5e89e21687b4/merged major:0 minor:1141 fsType:overlay blockSize:0} overlay_0-1144:{mountpoint:/var/lib/containers/storage/overlay/7cf263a3ef495505ecf90993463ae51a98e2f8030f06bd6225d88f9ec8aa90fe/merged major:0 minor:1144 fsType:overlay blockSize:0} 
overlay_0-1149:{mountpoint:/var/lib/containers/storage/overlay/03166add8b8ba382dd4c321764420732b58989583830f1dfa405296475e023d2/merged major:0 minor:1149 fsType:overlay blockSize:0} overlay_0-1150:{mountpoint:/var/lib/containers/storage/overlay/51355d407a16d57d2d48e6ed336414a1a1a5f3bd0a8cc08f5aa22c19a5367897/merged major:0 minor:1150 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/38f10990a5df06d3a2c9c64e81a15c4b79a393e7c3ab25a9f7beab2843b63e5d/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1168:{mountpoint:/var/lib/containers/storage/overlay/36028eb309b5ace88719d3324c2381f06138ee3ab2479a3b04b26fb63ba56462/merged major:0 minor:1168 fsType:overlay blockSize:0} overlay_0-1172:{mountpoint:/var/lib/containers/storage/overlay/ce2c664033e56bc63e9a6b8e4cdc3b2665d9bdadaa0a098cf23bc2594b4d0494/merged major:0 minor:1172 fsType:overlay blockSize:0} overlay_0-1177:{mountpoint:/var/lib/containers/storage/overlay/e1ea2fb74ca94f7bbb3a368cb2a69b88e56467898da03bc5a95bd5ee606a1405/merged major:0 minor:1177 fsType:overlay blockSize:0} overlay_0-1179:{mountpoint:/var/lib/containers/storage/overlay/862c947f526c3f5324819b29b4f065382f8dc7d28eb8de5257f5d37967430efd/merged major:0 minor:1179 fsType:overlay blockSize:0} overlay_0-1181:{mountpoint:/var/lib/containers/storage/overlay/957eabeb4948a9d06f0ed25bb5e854f2d01a1189cda7a9d87bf6596cb2d3c16c/merged major:0 minor:1181 fsType:overlay blockSize:0} overlay_0-1192:{mountpoint:/var/lib/containers/storage/overlay/fce1fae46a7dc30339931732dcfd9eab1630f36e833029a3b4e22de3212098dc/merged major:0 minor:1192 fsType:overlay blockSize:0} overlay_0-1210:{mountpoint:/var/lib/containers/storage/overlay/3bc00b8ab6d2c0cca9267f1fccd3233d5fb371769f91cff9c53c8527e4a94ca3/merged major:0 minor:1210 fsType:overlay blockSize:0} overlay_0-1212:{mountpoint:/var/lib/containers/storage/overlay/03098f1d20bd5fe27be5c883ddd81a178eba7f3f6995caf7fee1c93d0508ba97/merged major:0 minor:1212 fsType:overlay blockSize:0} 
overlay_0-1215:{mountpoint:/var/lib/containers/storage/overlay/d0e83676b6dca857a6f29588873da86e354c4e4a4d7ac786f48a08d029340fd7/merged major:0 minor:1215 fsType:overlay blockSize:0} overlay_0-1217:{mountpoint:/var/lib/containers/storage/overlay/6c7c07adb80eb3f8a14ad7797c070cbca78d5dc9ecc75ba37653e9f5c992ef4e/merged major:0 minor:1217 fsType:overlay blockSize:0} overlay_0-1219:{mountpoint:/var/lib/containers/storage/overlay/5684abe0399a31b2c256b1c95e82fa96ece7d099485abe9a1b3bbfb67274a5ad/merged major:0 minor:1219 fsType:overlay blockSize:0} overlay_0-1229:{mountpoint:/var/lib/containers/storage/overlay/d1119838c905ffb438955c7dbbc9f85969b02e0db38077bcfe044fe570452b15/merged major:0 minor:1229 fsType:overlay blockSize:0} overlay_0-1232:{mountpoint:/var/lib/containers/storage/overlay/ca67f74ce86abf7800c9f49d5597e8b984b464fc488c24c5645edd68eb6e964b/merged major:0 minor:1232 fsType:overlay blockSize:0} overlay_0-1234:{mountpoint:/var/lib/containers/storage/overlay/fb2f945bca3c07b6d86c1be84df1f44a7f54e09266826ae4722a61f8c7ce2199/merged major:0 minor:1234 fsType:overlay blockSize:0} overlay_0-1236:{mountpoint:/var/lib/containers/storage/overlay/34f5209022300dba94b006b9dca18ac4126a7b334bf24b5faf01fa1acb88d5eb/merged major:0 minor:1236 fsType:overlay blockSize:0} overlay_0-1238:{mountpoint:/var/lib/containers/storage/overlay/259f64dec5733aaf592f44b16cff0f89f3f79cfff57fa011557bcb12afaae9a7/merged major:0 minor:1238 fsType:overlay blockSize:0} overlay_0-1240:{mountpoint:/var/lib/containers/storage/overlay/2229ad659d94be8df2c5127c74ca1ffd07dcd3f3b01ef86df8e47dfdd9d415e3/merged major:0 minor:1240 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/9d747fcad14468638101cbce020d215de91d0e26967a987a3d616227fc6ee723/merged major:0 minor:125 fsType:overlay blockSize:0} overlay_0-1255:{mountpoint:/var/lib/containers/storage/overlay/7612b9934a3d2853296e4d2d80b5e9dabf673cf2ff79f7a635e9f82ea1fecaa6/merged major:0 minor:1255 fsType:overlay blockSize:0} 
overlay_0-1269:{mountpoint:/var/lib/containers/storage/overlay/579ae5dc2d05c4d33b3609d226917366b398cf6afdb7156e810500f8ecfbb038/merged major:0 minor:1269 fsType:overlay blockSize:0} overlay_0-1271:{mountpoint:/var/lib/containers/storage/overlay/14b1628ce2ebae606ee883b402c4d154dfdf30ae408046f07b0829ba78d4dd52/merged major:0 minor:1271 fsType:overlay blockSize:0} overlay_0-1283:{mountpoint:/var/lib/containers/storage/overlay/2b8c2b5982210077e908d46268144f8933cac99a1f9acc6fce1a1ff986b76ed2/merged major:0 minor:1283 fsType:overlay blockSize:0} overlay_0-129:{mountpoint:/var/lib/containers/storage/overlay/5ead3c2fe9331eee2392c60bf3d1430faf4c5b72ab359230a6f80cb9ce800024/merged major:0 minor:129 fsType:overlay blockSize:0} overlay_0-1294:{mountpoint:/var/lib/containers/storage/overlay/74f802446f803442a152026d8d4cbefe7128a47e6f378bb039e685188e14ad48/merged major:0 minor:1294 fsType:overlay blockSize:0} overlay_0-1296:{mountpoint:/var/lib/containers/storage/overlay/bac25fa3f067aaa9202be4a972846f5edb963a6452f4cc58b17c9430bd9914c3/merged major:0 minor:1296 fsType:overlay blockSize:0} overlay_0-1298:{mountpoint:/var/lib/containers/storage/overlay/b197a30cb0687af2607e1fe4fa6c3113f2c0b9c9541246884e54f5a77df29a74/merged major:0 minor:1298 fsType:overlay blockSize:0} overlay_0-1309:{mountpoint:/var/lib/containers/storage/overlay/79466c79b0661d07647637720332cdd29d60a4c440a7fae4468c35c283f7aab2/merged major:0 minor:1309 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/de9d934e774c0495b0f3e1b732335dc2a195b55b5955da2ce8d72c807de4d267/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-1316:{mountpoint:/var/lib/containers/storage/overlay/bcf42174baefd8ddb1604eb8074e8bedc12c69ae56a38f7d72511f96733cc522/merged major:0 minor:1316 fsType:overlay blockSize:0} overlay_0-1318:{mountpoint:/var/lib/containers/storage/overlay/f5d67ae289de1f8b9726d5da2eb2b8baa61f2fe7f7b18b71f3534f6a2d40a92f/merged major:0 minor:1318 fsType:overlay blockSize:0} 
overlay_0-1324:{mountpoint:/var/lib/containers/storage/overlay/03fe19fc543a785c1f1c00e50b51dcf74b24862f4696cc05adeaf7c7f2990f47/merged major:0 minor:1324 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/f6b4fb8651bac521ae7784eda67f45f5d35005b38755d8975940556eb2262fd9/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-1330:{mountpoint:/var/lib/containers/storage/overlay/8017c83b73cda675ff1be1919899e4379205faebdd7673b239c6cd9c977c306f/merged major:0 minor:1330 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/5d498c3a8294fedcc9d94bd1dbe29b45f6517e03adc5006ed4f2964d466eb992/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-1367:{mountpoint:/var/lib/containers/storage/overlay/aab4388cae85be31d4736e273d2826fc67e3e244d5b6a17b44fc9995cd7e4847/merged major:0 minor:1367 fsType:overlay blockSize:0} overlay_0-1378:{mountpoint:/var/lib/containers/storage/overlay/3506313797cdb4a67c88dc9119e1fa936e39560ae0531401e2aa26241eee636b/merged major:0 minor:1378 fsType:overlay blockSize:0} overlay_0-145:{mountpoint:/var/lib/containers/storage/overlay/45c0b99da5d66b0798fc61ef580f24e38dee8ceb190011774726bba3e8fff709/merged major:0 minor:145 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/be69df5233b5af0c4e3370bc8cf303f59459e2ef1d455eabb519f4fa40402cec/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/d469f84cae0c1588db395b1c865d99c7a18020839d0bf74e66f2594ecb946424/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/f7fa3e49692e2c82192386b57c71c192386de18e3a812e5bb79008861aa47289/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/d49e3416cb986f9a651c666682c110f4fb734709abc650627fcdd4aa04efed50/merged major:0 minor:154 fsType:overlay blockSize:0} 
overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/e5d06a3335de6480cadc16ef6806428eac761ec1c4de61ef5135916221fd600f/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/c4457d3ef99a05549ebd926d7a03708d9f14421ebff8f10d486e1e5375567809/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/9a50879f693bf49f1958361b72b697efa29d966960118671a041b0df202a75be/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/6a75ba6e6a689efe7d539fff12f1b219de9af43f40c1f3c5281a59a9039d7862/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/1204a7d44ce1cd4f8e62d2f7d57112a4e984b156fdcef260b7f422b8d54f241d/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/5f023b95f6754db86158e9fb51578abc76440ac93211ba1e306cebe18d24a394/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/f91e86a9a8abc36eb8193a972418afe7eac6588b8a9023457f595149053bf86e/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/c4d3d5230916f1539bebd3111522faff975309105e0e6394089928bc51517fbd/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-181:{mountpoint:/var/lib/containers/storage/overlay/65129a57cc461dbe336b65ad5b3c4606cc84e25598f31e0556df2e0ecfa78258/merged major:0 minor:181 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/38cfe351a3623e004bb59547ad926bbf93f6c3bf98a2182805aaf44c49d66151/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-187:{mountpoint:/var/lib/containers/storage/overlay/2dbbf13af7cb0cc9e6a619d6fdb74eb7c26373f197cfc94e229954666dae5405/merged major:0 minor:187 fsType:overlay blockSize:0} 
overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/96622f2d08ca88ad0c31f97226c2b2313196f10ba2efa065798a6f7d4d9e3df4/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-192:{mountpoint:/var/lib/containers/storage/overlay/b8af98a8fb3619557c5d235fcf91ec76b069c629788e19c8023fd6f8788387c9/merged major:0 minor:192 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/5bf16715eaed0bf1e5d465d599f51b31436ef0c615c7f55e4913f9eeeb1bfaab/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-205:{mountpoint:/var/lib/containers/storage/overlay/7c610bf44f623fe8f94ec995c09015edfa4af1587cbffe7a059c7a4cad072a42/merged major:0 minor:205 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/240cf89cbcd8abc9a162f492ff1e98c487e8a8b01f998848db9e6ace84e55d7d/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/d8056bf682ee65a8a3f4bf943557caabe0efaaf5977278855a317b3e8fc3871a/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/a3712d9182f65cb49ca86b5e488d31f94753347863238f36f7069b73f8605c13/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/353c37e500cd11a05a1699cabffb77c279258fda4d09352a8e92bfe06d3ddcd8/merged major:0 minor:221 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/5ef347ad83e552ec7e1bc00c1ca51245e8749adf576e709cde7047acd5c1ff08/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/35bf4119eab559956fc810fed7abf821fa146c38402f736ed8f4b3ddb170550b/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/7aecd28550392750f7569fa684701738e1eea651aa7b4434c78f80ecb5ae5dc3/merged major:0 minor:298 fsType:overlay blockSize:0} 
overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/dfec895fbbef2cb5a492b8aca6351498d6f0380fecc837a68002e4fca2453a80/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-306:{mountpoint:/var/lib/containers/storage/overlay/843b29e3b1b64772806fac526d0c2bdfa52aa840fb7b336a10b3de9446b6322b/merged major:0 minor:306 fsType:overlay blockSize:0} overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/9fa73a3c5987dd4b016484975c84611c66ad31f2adb8417387289e6cb250a01f/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/a0f1273e9383fe485d886c7a2a383c9fd3d111ff7e2c1f2aacfcb2d009e98706/merged major:0 minor:314 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/e802df0597871f848b404bc72df01c6f4e2b4497e11dd46e7500b966d110fa15/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/36c643938b75ac36c687ef3cbb4112c2ee738a06748769a8d691fa5dfea33ebf/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/451ffb193175408b1597622ead90101ad8677531d255a5cc34b3c971f75986b2/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/a2ef9aec2c425d431d78d5f3c9e34bfc3980e8157963c83db3a8e51264e829c6/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-324:{mountpoint:/var/lib/containers/storage/overlay/a25c5dc742f70dd376cca0343fa32202b20d19d1e4af1e67611c2eee6a54a266/merged major:0 minor:324 fsType:overlay blockSize:0} overlay_0-326:{mountpoint:/var/lib/containers/storage/overlay/8a696c78f381a7d480e4d7227b831c94ed505e65ee35149b48e3ec0b1afedc40/merged major:0 minor:326 fsType:overlay blockSize:0} overlay_0-328:{mountpoint:/var/lib/containers/storage/overlay/9c368d32b1f2586a5e53b2c2703871907b8222e7bc24d5c7eea326433593bad8/merged major:0 minor:328 fsType:overlay blockSize:0} 
overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/93a9596948d7f4928710ce170248e951d90c6ae95edd9762b4e3785261bee2a9/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-333:{mountpoint:/var/lib/containers/storage/overlay/f92d9577e14272797da4a33346d1fad55b678d72b5d77e04167b78eda80bf309/merged major:0 minor:333 fsType:overlay blockSize:0} overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/b7e86fabd893abe9fad6319f9361a6fe535ee3f7e5cf3dc07b47cf51622754f5/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/6291ece804ca6d3334e2677c9a83c1df8f1008755c8eb7fb43f2b8ce23a0847f/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/060734d728faa4ae055b5052f463de962d5029c65e4a1a07e6100696a1daa0c4/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-341:{mountpoint:/var/lib/containers/storage/overlay/0a4335f53b9237e2deefa3ff7b29666eeb907837b50679131044b38091baa4a7/merged major:0 minor:341 fsType:overlay blockSize:0} overlay_0-343:{mountpoint:/var/lib/containers/storage/overlay/370111c5b412e518413797b34b517cc10c47f742f80a1c2c024c0eb36461649e/merged major:0 minor:343 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/800f912b95d6f85c267b69451c6a9d591bdb252bf5aedcdf36f7e803600c38e3/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/2afc66e39d63d5bc8f0c46c9d3dcb4a6bda8b8eea4f735bb7d09a47926c4275c/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/80983b6a7190fc06acc4e357ce69e514504a1f84735e7e894ffb8fa5e80684d5/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/6789ff5af7ea75b29b000460191156afd8819d29e8cafaea71c64437f31cb249/merged major:0 minor:360 fsType:overlay blockSize:0} 
overlay_0-371:{mountpoint:/var/lib/containers/storage/overlay/5e836da9042940a754a925a38a9dea3f03f73c18064bf1d283011208da18a925/merged major:0 minor:371 fsType:overlay blockSize:0} overlay_0-375:{mountpoint:/var/lib/containers/storage/overlay/5a4b0681bec5cd9c480034391183851a806b9deb18eca64d45dcf4608c5df684/merged major:0 minor:375 fsType:overlay blockSize:0} overlay_0-392:{mountpoint:/var/lib/containers/storage/overlay/1fb426478ad94055f5ad278717d4af6655b2caaa93ba45cbc1430cb4b04eb972/merged major:0 minor:392 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/a635d4efddaea75e454f3534df419b08a8b23e63eab424f1beb9cebdc97f1696/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-402:{mountpoint:/var/lib/containers/storage/overlay/293b8a52a2ca189d869c4d225dfae4a949ebb9495790af63f839b71e4d824ed5/merged major:0 minor:402 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/138d56e55b93943e325fe49293877b92bb5b0ca9d4991755a841eecfb2c15b6f/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-410:{mountpoint:/var/lib/containers/storage/overlay/d88a332468d589e28389a23bbbd27951d761092d36fa2610824c37228788a01b/merged major:0 minor:410 fsType:overlay blockSize:0} overlay_0-412:{mountpoint:/var/lib/containers/storage/overlay/8493e057139dea5f015730b0a8a74b857104da3636c22e81203674e347a8e871/merged major:0 minor:412 fsType:overlay blockSize:0} overlay_0-416:{mountpoint:/var/lib/containers/storage/overlay/3e09b6cd0ab4c2dbe15995d7f98e57c0115c594e064120d3f03454e548ec76f9/merged major:0 minor:416 fsType:overlay blockSize:0} overlay_0-418:{mountpoint:/var/lib/containers/storage/overlay/87f6ba0fbf31cc3005fe33f11b6951cdb8c043e9d9fd6faed06f55afc54ae7f9/merged major:0 minor:418 fsType:overlay blockSize:0} overlay_0-437:{mountpoint:/var/lib/containers/storage/overlay/be80625ea4fd916f70c249882c5590d869b4bad74236afa08d9cf6b29e7f6917/merged major:0 minor:437 fsType:overlay blockSize:0} 
overlay_0-439:{mountpoint:/var/lib/containers/storage/overlay/9a36ff89ad76d098a3cb3a256c01884782b891a69b0346e425dfae40ed2b9c70/merged major:0 minor:439 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/7896ac7b1783f3499221333dd0f777e25c94592494e71f3024b93b93622e6874/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-441:{mountpoint:/var/lib/containers/storage/overlay/01211846f3cf6ecbeec19f2d689955965b365774506498cd28226cb0fff792ff/merged major:0 minor:441 fsType:overlay blockSize:0} overlay_0-443:{mountpoint:/var/lib/containers/storage/overlay/4892f6bf2675e552ec51fab27a301fc82d232f308044568b789929f1c570b060/merged major:0 minor:443 fsType:overlay blockSize:0} overlay_0-445:{mountpoint:/var/lib/containers/storage/overlay/3707a430a2147846c56139fba923bed13a74cf8ff676fc418fefd0f8be338a80/merged major:0 minor:445 fsType:overlay blockSize:0} overlay_0-447:{mountpoint:/var/lib/containers/storage/overlay/5351721e4c19ccd6c07545052c379c2c8779281527501ad324fcb363c087324a/merged major:0 minor:447 fsType:overlay blockSize:0} overlay_0-449:{mountpoint:/var/lib/containers/storage/overlay/1e4c14ca570becdbf708b7a6c3f9e77a1a335aa63e2270ef6e3a808f8c839ed0/merged major:0 minor:449 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/f56b3d219752f4d0d0fd31922d659c8e012cd0f204b12b039b0a30b7ec39c364/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-455:{mountpoint:/var/lib/containers/storage/overlay/c8f3b17e6fcbdcd061160c90c299d07b11f14d95c6827aca80d134ac81b2189c/merged major:0 minor:455 fsType:overlay blockSize:0} overlay_0-457:{mountpoint:/var/lib/containers/storage/overlay/fb58aab3e7d509fb6b42a8f92e8c0c69ef6fe1d9912dd99c13a3a9b4037af3c1/merged major:0 minor:457 fsType:overlay blockSize:0} overlay_0-459:{mountpoint:/var/lib/containers/storage/overlay/9a9cb980a7a0af2757a7c6d1309bc58945beefe5d689449a33fe27ec32e01a39/merged major:0 minor:459 fsType:overlay blockSize:0} 
overlay_0-461:{mountpoint:/var/lib/containers/storage/overlay/24b6e197ba7c49067937d0dee0c721a3116f0ebd575b2ac8f1876a971b2c234f/merged major:0 minor:461 fsType:overlay blockSize:0} overlay_0-462:{mountpoint:/var/lib/containers/storage/overlay/ce399601c9f83837754e4ace3b35eb9ba874c003099655a275ed04f0a1a64d84/merged major:0 minor:462 fsType:overlay blockSize:0} overlay_0-466:{mountpoint:/var/lib/containers/storage/overlay/aca251d297295b39c2614f5c2b3862dd7802a4f1f75421fe1fbdd0fd9b6c5d31/merged major:0 minor:466 fsType:overlay blockSize:0} overlay_0-468:{mountpoint:/var/lib/containers/storage/overlay/76450d8614329af579b4a41018fe7c8df6776652234763444b9af79966833738/merged major:0 minor:468 fsType:overlay blockSize:0} overlay_0-470:{mountpoint:/var/lib/containers/storage/overlay/9b1ff36d79e3e7847395befa0acfeb1ba826c8ad01fb0562212d6559e23815d1/merged major:0 minor:470 fsType:overlay blockSize:0} overlay_0-473:{mountpoint:/var/lib/containers/storage/overlay/6077c2d23de16f14e8156899cff0e14391a6f55296b7f31e006871d79da4e616/merged major:0 minor:473 fsType:overlay blockSize:0} overlay_0-475:{mountpoint:/var/lib/containers/storage/overlay/023ee5ad61df5bc79164168c2d150eb40e6c248fe05476f983911b89c1d4baa9/merged major:0 minor:475 fsType:overlay blockSize:0} overlay_0-476:{mountpoint:/var/lib/containers/storage/overlay/f044d0ebe070f235fb7ec02f63e448e762331a420681b926e099e62e86d0b9bf/merged major:0 minor:476 fsType:overlay blockSize:0} overlay_0-477:{mountpoint:/var/lib/containers/storage/overlay/3d317873ea3d74bb2e89224a919c275c6ea98abdbbcd7b92f74c79ef0c632b20/merged major:0 minor:477 fsType:overlay blockSize:0} overlay_0-486:{mountpoint:/var/lib/containers/storage/overlay/6123ea9cde228fb18ef6a7bc04c1f7c6f86b85f036d9948a44a6d5695c115e10/merged major:0 minor:486 fsType:overlay blockSize:0} overlay_0-488:{mountpoint:/var/lib/containers/storage/overlay/e5ec5051cd85d4b9cd89513b5fcd914b85c17c72ef5c8fc7595716e519ae83c8/merged major:0 minor:488 fsType:overlay blockSize:0} 
overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/cc2a6519fd22c0cf997f28e29b5008cc96e8c6d2a1160e6c5584c57d8a9f2911/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-494:{mountpoint:/var/lib/containers/storage/overlay/0f82c4766583c043d4651ffb07d5e738c87f346748d775c03687a160bbc51524/merged major:0 minor:494 fsType:overlay blockSize:0} overlay_0-497:{mountpoint:/var/lib/containers/storage/overlay/d3d9543a80ab390735db3ce9b2dabaf547a6abcc764b2c0e67717c065ee555f6/merged major:0 minor:497 fsType:overlay blockSize:0} overlay_0-498:{mountpoint:/var/lib/containers/storage/overlay/f1703bffb9a1fdaca9109cf3716049dd6dd1d7fc485bf78c8b7c45110416b026/merged major:0 minor:498 fsType:overlay blockSize:0} overlay_0-499:{mountpoint:/var/lib/containers/storage/overlay/82cb0ab5f3116d89537cdf602bef13497ad55d67511e829d2357c885a6d94303/merged major:0 minor:499 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/75fdbad8721879379bdef37227052facc763b603aff5c0c7ce627f23d7b83507/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/06692521472b7d1259d6dd3cc52e921c7888fd9f9e29670324d67c058dfbf6fd/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-51:{mountpoint:/var/lib/containers/storage/overlay/cce108d0a5b9b3164085a067e50c44d45d01b2ac1a2531b314f58a784815b7eb/merged major:0 minor:51 fsType:overlay blockSize:0} overlay_0-511:{mountpoint:/var/lib/containers/storage/overlay/70de4e4432cae8827e7cb32381f0e44e4ae049d77516ef31f039e43d38dc5f40/merged major:0 minor:511 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/d74a8d11369e225ad662a83c1f057725b83a9e1811b2e865c520d02f308bf5a6/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-521:{mountpoint:/var/lib/containers/storage/overlay/101096a1decf8c64cc65039a5a158324f21391e097f3819bb1ec0730e5fdd83c/merged major:0 minor:521 fsType:overlay blockSize:0} 
overlay_0-523:{mountpoint:/var/lib/containers/storage/overlay/7453f7000941563b2db4cf373c04927a2ee6471c008b8a493f760ded106bf7ea/merged major:0 minor:523 fsType:overlay blockSize:0} overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/2aa2f34d22eb435c3f86f1b022e67f48fc2a3a72c2432582a9cc0fc75f60b2e6/merged major:0 minor:529 fsType:overlay blockSize:0} overlay_0-530:{mountpoint:/var/lib/containers/storage/overlay/077c05838304e9f7e69eb1c73d7bb40101e84f2d72e580e843ed2ddd2bd77d76/merged major:0 minor:530 fsType:overlay blockSize:0} overlay_0-532:{mountpoint:/var/lib/containers/storage/overlay/dc914bebdaafa673197fefdab5b8f28ae8efaaaf6e9674885ccbf31e796e3330/merged major:0 minor:532 fsType:overlay blockSize:0} overlay_0-535:{mountpoint:/var/lib/containers/storage/overlay/e30917b616992c99e0eb22badccdb2b04223a1976113726da64a463d082ca54d/merged major:0 minor:535 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/a8bcf92287008d8b751dbc17ba05dc31fb8fcc0076ebc88c969055404534bc01/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-547:{mountpoint:/var/lib/containers/storage/overlay/53f691b1d58d177c957e3f85d8ac589e0faefd4d79a2e8e63cf384fe2869be44/merged major:0 minor:547 fsType:overlay blockSize:0} overlay_0-549:{mountpoint:/var/lib/containers/storage/overlay/bbac5ea976c7b6477f4ef91751f60fe2d577c2b91b1d86dbd4f0397de9d95b12/merged major:0 minor:549 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/3a92e65749683dd671d759c78b134dea1ac2d73b207858d337622f031fd94d44/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-564:{mountpoint:/var/lib/containers/storage/overlay/847cd11ff7c12dd964a3e2b1f97fadab0cbf2c0a549f9dbc9e664537a4aeb730/merged major:0 minor:564 fsType:overlay blockSize:0} overlay_0-570:{mountpoint:/var/lib/containers/storage/overlay/c22f9885f89cfb5541f04d9f5361544d5fccaad59a57e25f478a43a44493a60e/merged major:0 minor:570 fsType:overlay blockSize:0} 
overlay_0-572:{mountpoint:/var/lib/containers/storage/overlay/1bdb2d7cc33aad4c35ec23c8a0e86849b33e0d15caea169c65da31fde193c68a/merged major:0 minor:572 fsType:overlay blockSize:0} overlay_0-579:{mountpoint:/var/lib/containers/storage/overlay/59b3fdb30defc61156e02a4b0365d8a1a3984ea19730f1decfb59033251b361f/merged major:0 minor:579 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/9bdc23c458b126ecb0e3a733870ed8d17def44d44805e0aebfc261f8698f9dca/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-592:{mountpoint:/var/lib/containers/storage/overlay/214aebd01ea8ca122e1d638b1f60ef91415742439c9f46164c6035bdcb0d9678/merged major:0 minor:592 fsType:overlay blockSize:0} overlay_0-596:{mountpoint:/var/lib/containers/storage/overlay/5230d8c90b41b5de432d068f968d00ea935274d679b074117b53b436da5698f5/merged major:0 minor:596 fsType:overlay blockSize:0} overlay_0-597:{mountpoint:/var/lib/containers/storage/overlay/ebd1c67f4c805253c7101529d33882dd1f2d06217a6cfdb7619f2ff92460f0c9/merged major:0 minor:597 fsType:overlay blockSize:0} overlay_0-599:{mountpoint:/var/lib/containers/storage/overlay/1bf62c9602a1e94a624a2b781bb5b545512169052739a2e0706c460fa8ab83d4/merged major:0 minor:599 fsType:overlay blockSize:0} overlay_0-606:{mountpoint:/var/lib/containers/storage/overlay/643811eddafebc7706eb73d2f759d4fb6bbdf9fa766c3ebeeeb5b72f10279fa6/merged major:0 minor:606 fsType:overlay blockSize:0} overlay_0-611:{mountpoint:/var/lib/containers/storage/overlay/f6b3864bab95f24766890a3f71d7140c5dd003f0cfae2de96df729a31c396d5c/merged major:0 minor:611 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/80668c9c93d66d5443a1e8a75b8253d81fb7c0caa56fb98f17c1b4402c05e221/merged major:0 minor:612 fsType:overlay blockSize:0} overlay_0-615:{mountpoint:/var/lib/containers/storage/overlay/a2a8d23a4108c6e8301a8fd7475462937cf28a1912c0908bd11a70e7681fe682/merged major:0 
minor:615 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/56f800df1e0a01f685ee05550d57fe0af12b9d35038cf63183c2c1ab6f09e571/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-623:{mountpoint:/var/lib/containers/storage/overlay/f43d05fd616821a81f1348e46dd4bda9ade6175d36bf489c6fd320ed04d0f826/merged major:0 minor:623 fsType:overlay blockSize:0} overlay_0-625:{mountpoint:/var/lib/containers/storage/overlay/581d297b3db273a168d1eb37600f2c32216538c88cbdca2f2c023471041f8199/merged major:0 minor:625 fsType:overlay blockSize:0} overlay_0-627:{mountpoint:/var/lib/containers/storage/overlay/87ca1ef4ed320956664219e814789abc76409834ed7ccb240187135cd3b59703/merged major:0 minor:627 fsType:overlay blockSize:0} overlay_0-629:{mountpoint:/var/lib/containers/storage/overlay/dd0b5478d13d3f25af12539bba4c9061094c11935e2e82517bde7122a435c92b/merged major:0 minor:629 fsType:overlay blockSize:0} overlay_0-631:{mountpoint:/var/lib/containers/storage/overlay/c277f9d26765a90432e63b6d3e5c93075aef3278fd6e2d9ad7cb8d8c9eeca326/merged major:0 minor:631 fsType:overlay blockSize:0} overlay_0-633:{mountpoint:/var/lib/containers/storage/overlay/72f448c1d42e2f665eddee233cc3cf31604916ca637115638e7e84ac43a7e8ea/merged major:0 minor:633 fsType:overlay blockSize:0} overlay_0-635:{mountpoint:/var/lib/containers/storage/overlay/c9bed27306aa29534258748d5095ce5b12b49ba17a6168aa1687638bd414daa7/merged major:0 minor:635 fsType:overlay blockSize:0} overlay_0-636:{mountpoint:/var/lib/containers/storage/overlay/2050aa07ed2077001e60f78cbe5d6c15d37e58a48f13286d299375c15b24d9de/merged major:0 minor:636 fsType:overlay blockSize:0} overlay_0-638:{mountpoint:/var/lib/containers/storage/overlay/e4e4c9eb0ced9ed5d9d6fc548e9fdbe994f089b5acfe7def6d0230e17630ffd4/merged major:0 minor:638 fsType:overlay blockSize:0} overlay_0-641:{mountpoint:/var/lib/containers/storage/overlay/7dc57a814dc057d4e5923e0572c1782456009ef844ff3ea4df41f0a924cfb6b5/merged major:0 minor:641 
fsType:overlay blockSize:0} overlay_0-648:{mountpoint:/var/lib/containers/storage/overlay/9d95667ffafb18b16a1dcbb54b87dd07efa79f4d14cf214f570f686341dfc028/merged major:0 minor:648 fsType:overlay blockSize:0} overlay_0-65:{mountpoint:/var/lib/containers/storage/overlay/2438cc20366af2e56c9f6a9aa2f4fdf6ddb23da179b3861af65ac4e38b0f99ea/merged major:0 minor:65 fsType:overlay blockSize:0} overlay_0-650:{mountpoint:/var/lib/containers/storage/overlay/06967d865c6b8cfaf6d49c906d22eb67c0b47a8306d2351cae910560c4fff788/merged major:0 minor:650 fsType:overlay blockSize:0} overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/fe6ae3871c59872ecc3bfede0d817cdfcaa32ff6b801156b441e71b804f784b9/merged major:0 minor:652 fsType:overlay blockSize:0} overlay_0-654:{mountpoint:/var/lib/containers/storage/overlay/8d64f36a648fb9d70e5b81f45998a81f82a88e2a45606ccde3d3be62c62291b9/merged major:0 minor:654 fsType:overlay blockSize:0} overlay_0-658:{mountpoint:/var/lib/containers/storage/overlay/4b0938427eea223487189485eee9d58daeca7d4359fd20e216beae3b3747e576/merged major:0 minor:658 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/e93a64957e2e495f0e24ffa73bd18bb29b7f28b6c88d33ac3041d3788526502e/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-660:{mountpoint:/var/lib/containers/storage/overlay/9349173a0e69f1368b7e46adf834139b4c73ca909dbc8c5c0e12889a189c5f88/merged major:0 minor:660 fsType:overlay blockSize:0} overlay_0-669:{mountpoint:/var/lib/containers/storage/overlay/43a91c8e81ada24d513ac12c4b42b90ca63c5715b53bb75e4ea74d4b7a600310/merged major:0 minor:669 fsType:overlay blockSize:0} overlay_0-670:{mountpoint:/var/lib/containers/storage/overlay/b4d107c8fcbf8776bf501cf69e52e378a6c5e1c8297bcd5b41cf67d5aeb6edad/merged major:0 minor:670 fsType:overlay blockSize:0} overlay_0-674:{mountpoint:/var/lib/containers/storage/overlay/ae1b170c7bf5d58c827127db520223e5e7e9a7ca4d6e52410c3a0346f45793ce/merged major:0 minor:674 fsType:overlay 
blockSize:0} overlay_0-677:{mountpoint:/var/lib/containers/storage/overlay/94d26b3c3a0f78250cd30b250f4733df7ccae921fc7c56c3fbdd249b0d37e6b7/merged major:0 minor:677 fsType:overlay blockSize:0} overlay_0-680:{mountpoint:/var/lib/containers/storage/overlay/115fcb0ab72cee655c18decab4ba4150d108eb385013d5c55fccbcb4f9f6775e/merged major:0 minor:680 fsType:overlay blockSize:0} overlay_0-685:{mountpoint:/var/lib/containers/storage/overlay/099024864eeb38d2e5d51aa2c36016781385e8609e168efdc16f1c048402ac8b/merged major:0 minor:685 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/20ae7765aa02b4eecb1885def5a2a2f94f93d06c7795c3cb56b181ac53a075f2/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-688:{mountpoint:/var/lib/containers/storage/overlay/4dd6f05e91cb50b362c4c4a5a058c24e16c8bfea3d20284c034f9419bbe7fee9/merged major:0 minor:688 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/66bcb24c18c8de1d2f8882f28234c9b901e704aa1a8c5f560bd9d235a7ae7014/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-690:{mountpoint:/var/lib/containers/storage/overlay/3cfb1eb4a960be2bdd23913d77e89312ccaa6454bc4593e4598f3e1dabce5b74/merged major:0 minor:690 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/c644c8eed288f52942d8e4d0374283a440c768320d3421f3771c395bd2a36cd6/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-700:{mountpoint:/var/lib/containers/storage/overlay/6d2f96413e7c8a81d7e8da4c5d019bcbeaedfc2bc6f3541dd83a2e3245400576/merged major:0 minor:700 fsType:overlay blockSize:0} overlay_0-703:{mountpoint:/var/lib/containers/storage/overlay/071b22bd07bbb80732020e218ca173765b81bd0ab20cdd8c61f50a57f6df2bcf/merged major:0 minor:703 fsType:overlay blockSize:0} overlay_0-704:{mountpoint:/var/lib/containers/storage/overlay/a067c6180c7cf8a8f9cefc1b3c5a9478230b8c760a42673fdfe6c1663affdb03/merged major:0 minor:704 fsType:overlay blockSize:0} 
overlay_0-705:{mountpoint:/var/lib/containers/storage/overlay/2e30bf5aeb2086fbb537e1b84fa5779e56d4febfa30128a84482c4edc58be8de/merged major:0 minor:705 fsType:overlay blockSize:0} overlay_0-708:{mountpoint:/var/lib/containers/storage/overlay/372306a4b39ee8c67eef0d59bbdf15ad95ee1e7dd900af7e67b60076297492db/merged major:0 minor:708 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/0902f821ad3a2248e03ef064ed563742082cd25c761b2ba14727f0058c564484/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/209697120e7f36dd28039393b2057b87aca0e95c1cd1af68a4c83d349722c870/merged major:0 minor:710 fsType:overlay blockSize:0} overlay_0-711:{mountpoint:/var/lib/containers/storage/overlay/1b6a8bcd0fe0a111f37d9ca4b0e2b40673ccca844fd9662254aa1d3cc2e76d09/merged major:0 minor:711 fsType:overlay blockSize:0} overlay_0-728:{mountpoint:/var/lib/containers/storage/overlay/094227f07fe36fd2cd7a2fdfda81290397d22435dd5222c7015ce9f13dcfeaf0/merged major:0 minor:728 fsType:overlay blockSize:0} overlay_0-729:{mountpoint:/var/lib/containers/storage/overlay/af8f3735ce7e4e2c349bb2c848e3b6397ba8c42a9fcb656080bdcc149eab4e77/merged major:0 minor:729 fsType:overlay blockSize:0} overlay_0-73:{mountpoint:/var/lib/containers/storage/overlay/005a9902f13b19f3c023fbbf7af30fac581b62ed820df0b70dd535e92850ec96/merged major:0 minor:73 fsType:overlay blockSize:0} overlay_0-731:{mountpoint:/var/lib/containers/storage/overlay/12ca1ba2218f67e02287ab2ebe1061a638caa447ae33bda80d31ddaca06205e5/merged major:0 minor:731 fsType:overlay blockSize:0} overlay_0-738:{mountpoint:/var/lib/containers/storage/overlay/09b57466b2de72a857279f37b6f9fe62ae940c33c52fac31f21a28a54ea2fca8/merged major:0 minor:738 fsType:overlay blockSize:0} overlay_0-745:{mountpoint:/var/lib/containers/storage/overlay/faf7f15b03ee5d31ac211f4bff9fe9ab27157cf9544d7acbb403fc5885f9f382/merged major:0 minor:745 fsType:overlay blockSize:0} 
overlay_0-746:{mountpoint:/var/lib/containers/storage/overlay/2849d85fdfa6c2e90b75c05b3d0d5509fd10839bf096aafbf85479802cc04768/merged major:0 minor:746 fsType:overlay blockSize:0} overlay_0-758:{mountpoint:/var/lib/containers/storage/overlay/c473aba1896572003a4f3db88b9e6b9dd31ad98715c816d5403de977a7a3c8cc/merged major:0 minor:758 fsType:overlay blockSize:0} overlay_0-764:{mountpoint:/var/lib/containers/storage/overlay/4f43a705b42d335050290acd3125fe2338136891ee488386c176cdce71f0296e/merged major:0 minor:764 fsType:overlay blockSize:0} overlay_0-766:{mountpoint:/var/lib/containers/storage/overlay/944aa99fb382c38d972bdd2e0a43c00489c994f8a7988d4c5556390916d99dff/merged major:0 minor:766 fsType:overlay blockSize:0} overlay_0-767:{mountpoint:/var/lib/containers/storage/overlay/a42beec80358b204488fc18420dbaa464bd9963191d5a4456f358958bb8234ea/merged major:0 minor:767 fsType:overlay blockSize:0} overlay_0-770:{mountpoint:/var/lib/containers/storage/overlay/ad3626501181a58c4c75bb39d99ef06e4d5e582696096cbe6c8bf3499106a0e8/merged major:0 minor:770 fsType:overlay blockSize:0} overlay_0-772:{mountpoint:/var/lib/containers/storage/overlay/47a6436d74ff970072aa5decdfdff075c2e517fb4d8330b5b1a472527997dab6/merged major:0 minor:772 fsType:overlay blockSize:0} overlay_0-774:{mountpoint:/var/lib/containers/storage/overlay/c81d7bce6b171d88382e16639ba3850ad185637aa263022e1623604c022251c5/merged major:0 minor:774 fsType:overlay blockSize:0} overlay_0-775:{mountpoint:/var/lib/containers/storage/overlay/d2a1de0bf2df4f298bf83d77565125b68683340fa2e19abff21f44975d8fcf9d/merged major:0 minor:775 fsType:overlay blockSize:0} overlay_0-778:{mountpoint:/var/lib/containers/storage/overlay/b2590006e23f0f5e10e45ccd9820d0f3326aeb29757f2f998671f937a4d2b29b/merged major:0 minor:778 fsType:overlay blockSize:0} overlay_0-794:{mountpoint:/var/lib/containers/storage/overlay/b4390cd21d16ed6184ae5b7777241ccffbdeff4a2c88a10fc5e163d11320c5e1/merged major:0 minor:794 fsType:overlay blockSize:0} 
overlay_0-795:{mountpoint:/var/lib/containers/storage/overlay/a301909f61a734ef293f8eee1d41c3edfcf0caec4fea69e85c3d3d6058638aa4/merged major:0 minor:795 fsType:overlay blockSize:0} overlay_0-797:{mountpoint:/var/lib/containers/storage/overlay/87a86030fe4e0cbc72198614e1e7ab1c7adf938487145f9d5a1647b03a12a696/merged major:0 minor:797 fsType:overlay blockSize:0} overlay_0-799:{mountpoint:/var/lib/containers/storage/overlay/dd545ba75b36d0d63832e01110d505e29b6aa7e0d7dbad612a553e6313be10d3/merged major:0 minor:799 fsType:overlay blockSize:0} overlay_0-803:{mountpoint:/var/lib/containers/storage/overlay/956683c16266268ce9118504d8e45b0c1a745d8145de237f1f0e138b792c428c/merged major:0 minor:803 fsType:overlay blockSize:0} overlay_0-808:{mountpoint:/var/lib/containers/storage/overlay/d12c796a77a1eda3be58fd6bdea9a753cc37e142a316e0ef3e63a5d5207784c3/merged major:0 minor:808 fsType:overlay blockSize:0} overlay_0-811:{mountpoint:/var/lib/containers/storage/overlay/0767923aa7ca2eca817f8f917da0387d68c220886b2a2fa131e14adb4e093525/merged major:0 minor:811 fsType:overlay blockSize:0} overlay_0-816:{mountpoint:/var/lib/containers/storage/overlay/7b54b797607abb43eda7d6b27dde957d9c42f25949c3f63e4d9fec2ade3ca2d3/merged major:0 minor:816 fsType:overlay blockSize:0} overlay_0-817:{mountpoint:/var/lib/containers/storage/overlay/a75d10c19bdeb325d1c3d85096274d39beb3d7d9b2ee2eaade957cfccb3d761e/merged major:0 minor:817 fsType:overlay blockSize:0} overlay_0-818:{mountpoint:/var/lib/containers/storage/overlay/760ec4de63801621b3a6ba6d18f30f3f047c8949983764b59b50b3e97056ee69/merged major:0 minor:818 fsType:overlay blockSize:0} overlay_0-821:{mountpoint:/var/lib/containers/storage/overlay/cdb34fa1d9c9e1686d75eb6b215d42c87725f5a67eb108f48bae9ef79566b317/merged major:0 minor:821 fsType:overlay blockSize:0} overlay_0-823:{mountpoint:/var/lib/containers/storage/overlay/69153cd8ec604beabbbdec43e29e977c602506379ed9a523649c14f0ec542d53/merged major:0 minor:823 fsType:overlay blockSize:0} 
overlay_0-825:{mountpoint:/var/lib/containers/storage/overlay/6d243ef66ec3e57bd90bb9b5c11a85ad3e7b42cdd830319f279f61a4181e4c1b/merged major:0 minor:825 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/e27eb49f4392621c3ca57c81a707fab7c367b71f891ba33071b7d5e4cb7513af/merged major:0 minor:83 fsType:overlay blockSize:0} overlay_0-832:{mountpoint:/var/lib/containers/storage/overlay/bc660e134fe00bffdb19e08e0072f675fe7f53da1518238d407c51195151a87a/merged major:0 minor:832 fsType:overlay blockSize:0} overlay_0-836:{mountpoint:/var/lib/containers/storage/overlay/a2bcb637fd88692d54e311a6f2808451d4a4d6713dc3aad927cca2298ac014dc/merged major:0 minor:836 fsType:overlay blockSize:0} overlay_0-846:{mountpoint:/var/lib/containers/storage/overlay/96230f632a0fd6c6e70ab4ff71de8cb0ab7f01a7c8ffd4b84c903bdf0c8388bd/merged major:0 minor:846 fsType:overlay blockSize:0} overlay_0-848:{mountpoint:/var/lib/containers/storage/overlay/9ab35a3dfb1d50d09a6e5b0f84b5b46b75876741336d14dfb478caef95e641c6/merged major:0 minor:848 fsType:overlay blockSize:0} overlay_0-853:{mountpoint:/var/lib/containers/storage/overlay/7bd00a55673163ce5e39d92fbbfced5c20b6e41a6826e975bb9aef9e0b335682/merged major:0 minor:853 fsType:overlay blockSize:0} overlay_0-855:{mountpoint:/var/lib/containers/storage/overlay/ac01d791dd051b675145b70f50bdb38f2509cf1f529ba1d4184c88bd5acb00ee/merged major:0 minor:855 fsType:overlay blockSize:0} overlay_0-863:{mountpoint:/var/lib/containers/storage/overlay/330c6d4d59060ad4f79a328d793326114589474581b760dceead7880facd4f4e/merged major:0 minor:863 fsType:overlay blockSize:0} overlay_0-865:{mountpoint:/var/lib/containers/storage/overlay/b039c637a76a7da9475698c5a7ef618175fb44f48e99c0bb29835b75c9663e5e/merged major:0 minor:865 fsType:overlay blockSize:0} overlay_0-867:{mountpoint:/var/lib/containers/storage/overlay/517998b10daa2db85bdf2d79fa38949c1726a7770f6eb671ecc4aa173b5b2484/merged major:0 minor:867 fsType:overlay blockSize:0} 
overlay_0-869:{mountpoint:/var/lib/containers/storage/overlay/1baa95b18d5550851093e1a21cb69b812adff6520ae17b5a6812643f1fa31231/merged major:0 minor:869 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/8993149ecca1ee3e71e8005b0a6cef31bdcb325af097615d247d0992a6985946/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-871:{mountpoint:/var/lib/containers/storage/overlay/74843fce13480c8cd5b568e6d416a93ee80fe8eef1d0ef8cdfad4ba5759cb843/merged major:0 minor:871 fsType:overlay blockSize:0} overlay_0-873:{mountpoint:/var/lib/containers/storage/overlay/16f6c423ba7c1a425102c732c03cf4b662be533192f80baabc632c8e5588bff2/merged major:0 minor:873 fsType:overlay blockSize:0} overlay_0-875:{mountpoint:/var/lib/containers/storage/overlay/5fbe3010c5afe1413de8df6e7fa43ac1b9ac6ffd04d66a3a37e8daaf11dec3cc/merged major:0 minor:875 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/4a42ddc2abd07fece2d3172fc4e1467416e50e57a5aad061295375afc98a8a4e/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/bf78f9afe3eeeff5a6f77eaf3827eb2b1ae6006d8d393b5f2cbd3eec9929aeab/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-905:{mountpoint:/var/lib/containers/storage/overlay/859c2f38016f27b174d7266fb73a39039eec464773283627ef9df882a49d5e7f/merged major:0 minor:905 fsType:overlay blockSize:0} overlay_0-912:{mountpoint:/var/lib/containers/storage/overlay/30f73eb24132fd85909d2c338afcf11462b6449eedc0afc96b87a9ea1c1374f5/merged major:0 minor:912 fsType:overlay blockSize:0} overlay_0-916:{mountpoint:/var/lib/containers/storage/overlay/f14b8d0abc72fbe251afb561661a65470470470f06b275f277c7dd1294406170/merged major:0 minor:916 fsType:overlay blockSize:0} overlay_0-919:{mountpoint:/var/lib/containers/storage/overlay/bc88db9883803129fe290bab753f21b47a98582b6dd35c872799bf82e1a55eda/merged major:0 minor:919 fsType:overlay blockSize:0} 
overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/3d4c9f9c540a0abacef9f12205589635eabb50a52772740897dc104613164873/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-922:{mountpoint:/var/lib/containers/storage/overlay/e27085eb012a5043fe28931ddcf7641668f10ebdf47ebb829b6577d6970feb61/merged major:0 minor:922 fsType:overlay blockSize:0} overlay_0-924:{mountpoint:/var/lib/containers/storage/overlay/95a4d82e128cbf192d916c198e81c820e39044745e805342e52e37d5969dc2d8/merged major:0 minor:924 fsType:overlay blockSize:0} overlay_0-927:{mountpoint:/var/lib/containers/storage/overlay/a9191ace5c962cd1dec50e2885ab66a466532ce634228e6d208db86ddea76833/merged major:0 minor:927 fsType:overlay blockSize:0} overlay_0-931:{mountpoint:/var/lib/containers/storage/overlay/5b031567ac7844143aacce215805139a752a8d53d628f1d64bd6cf4392c6a091/merged major:0 minor:931 fsType:overlay blockSize:0} overlay_0-937:{mountpoint:/var/lib/containers/storage/overlay/3960a8c3b8b5eda5586993eb1b8a39306fb3d917c33935c4b75d07c671bcbb64/merged major:0 minor:937 fsType:overlay blockSize:0} overlay_0-940:{mountpoint:/var/lib/containers/storage/overlay/e580fcace71addcb5590d8a82d77da7eb5ce93aa08d5f795f80bbefd8674eb02/merged major:0 minor:940 fsType:overlay blockSize:0} overlay_0-941:{mountpoint:/var/lib/containers/storage/overlay/15cdaf8bd83680caed7030cd6e052e1a35b73684ef5374e6dd90daf35763b8e9/merged major:0 minor:941 fsType:overlay blockSize:0} overlay_0-957:{mountpoint:/var/lib/containers/storage/overlay/78d6f05d9d49ebb7d32702630fd6713f5f0d79e042b4609c02437e52177ddd12/merged major:0 minor:957 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/012887443a2d6633acf35af63f0a70bd0df87e576c3a65597fff4cab7998c665/merged major:0 minor:96 fsType:overlay blockSize:0} overlay_0-965:{mountpoint:/var/lib/containers/storage/overlay/6354bbfcfa2165215ea08e2d37f587017a7e9316a9fa6101afdf52b0be167e9b/merged major:0 minor:965 fsType:overlay blockSize:0} 
overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/5d309f9b281d087fe6b9c4b0cf3bcb334ba41bc5d5cb11354b3d8c55435ee765/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/cc4662d723616fa91ba0be03e00c0792c0934a7e3404c9ab7606c8ff8c92a686/merged major:0 minor:971 fsType:overlay blockSize:0} overlay_0-979:{mountpoint:/var/lib/containers/storage/overlay/052cdd8e5ed8ca92a04f30a4ee08402007149f0ba4b5bc8e1c978a47988f6cc8/merged major:0 minor:979 fsType:overlay blockSize:0} overlay_0-984:{mountpoint:/var/lib/containers/storage/overlay/7e90d3e9d0fdef2a6bc26e76d70467dd5dafe0cf39289158244d16f2ba76c2af/merged major:0 minor:984 fsType:overlay blockSize:0} overlay_0-986:{mountpoint:/var/lib/containers/storage/overlay/b3f0fd25d1b6a8238fa651a10e9d96e7ead50f781d215695bf4efae0efc31be7/merged major:0 minor:986 fsType:overlay blockSize:0} overlay_0-988:{mountpoint:/var/lib/containers/storage/overlay/79301e8af28ad62abed2bddf01c01efe401b8863ed7a652efb3a3a140248a638/merged major:0 minor:988 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/5be3406c7bca65074a5b86cc7255fa236a564143d9442c96bb97be83567fd8e6/merged major:0 minor:99 fsType:overlay blockSize:0} overlay_0-996:{mountpoint:/var/lib/containers/storage/overlay/47cbd045ea45106242a42a51b4760e10e66b8630d990d61d56d172041efeb20f/merged major:0 minor:996 fsType:overlay blockSize:0} overlay_0-998:{mountpoint:/var/lib/containers/storage/overlay/5d9b572dded592804ace96a0d64bb8499dc49009a6a22d1d337a951342432ecd/merged major:0 minor:998 fsType:overlay blockSize:0}] Feb 20 12:04:55.420776 master-0 kubenswrapper[31420]: I0220 12:04:55.417382 31420 manager.go:217] Machine: {Timestamp:2026-02-20 12:04:55.415948044 +0000 UTC m=+0.135186295 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 
AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1d3cbc82ca3451894ea40b65f988770 SystemUUID:c1d3cbc8-2ca3-4518-94ea-40b65f988770 BootID:5aa007af-ada2-4850-bae5-7cd3dd4060ba Filesystems:[{Device:/run/containers/storage/overlay-containers/3372bbf7f4c306095391a5b4c0a6615ca5aaf373fb3cc461d59deb2a7e8dca2b/userdata/shm DeviceMajor:0 DeviceMinor:1205 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a97bbf5-7409-4f36-894b-b88284e1b6d0/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:386 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8cf490279cd50e81a0597e17ffd2c0830f353d5b000ce0e906995ead9d10342b/userdata/shm DeviceMajor:0 DeviceMinor:540 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bd609bd3-2525-4b88-8f07-94a0418fb582/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:886 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-476 DeviceMajor:0 DeviceMinor:476 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5c104245-d078-4856-9a60-207bb6efcfe8/volumes/kubernetes.io~projected/kube-api-access-nlcjf DeviceMajor:0 DeviceMinor:884 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1378 DeviceMajor:0 DeviceMinor:1378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-221 DeviceMajor:0 DeviceMinor:221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd42f3b0e8e73a155f4ae8d3e76cb9c1f46437280ce91aa23b51a6b995b48869/userdata/shm DeviceMajor:0 DeviceMinor:425 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-803 DeviceMajor:0 DeviceMinor:803 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/906307ef-d988-49e7-9d63-39116a2c4880/volumes/kubernetes.io~projected/kube-api-access-5j82z DeviceMajor:0 DeviceMinor:280 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1330 DeviceMajor:0 DeviceMinor:1330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-306 DeviceMajor:0 DeviceMinor:306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-808 DeviceMajor:0 DeviceMinor:808 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a7bfbfdcb0537291cfa1b372b6f031e0ea91896123e7787ed049d4ad28854cc/userdata/shm DeviceMajor:0 DeviceMinor:379 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1212 DeviceMajor:0 DeviceMinor:1212 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-867 DeviceMajor:0 DeviceMinor:867 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-869 DeviceMajor:0 DeviceMinor:869 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/39a1a5d33692c6053b9e75c3ef75f6d5e551935ea080f8573acf4698acb62831/userdata/shm DeviceMajor:0 DeviceMinor:582 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1229 DeviceMajor:0 DeviceMinor:1229 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1023 DeviceMajor:0 DeviceMinor:1023 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-498 DeviceMajor:0 DeviceMinor:498 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8df029f2-d0ec-4543-9371-7694b1e85a06/volumes/kubernetes.io~projected/kube-api-access-kwgg6 DeviceMajor:0 DeviceMinor:857 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2f9cd117-c84f-44c9-80a9-879a04d62934/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1160 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-416 DeviceMajor:0 DeviceMinor:416 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:753 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-818 DeviceMajor:0 DeviceMinor:818 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1108 DeviceMajor:0 DeviceMinor:1108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-988 DeviceMajor:0 DeviceMinor:988 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-462 DeviceMajor:0 DeviceMinor:462 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/kube-api-access-6td56 DeviceMajor:0 DeviceMinor:253 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:424 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/39790258-73bc-4c37-a935-e8d3c2a2d5c6/volumes/kubernetes.io~projected/kube-api-access-94lkp DeviceMajor:0 DeviceMinor:977 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:563 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1265 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-708 DeviceMajor:0 DeviceMinor:708 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/af18215b-e749-4565-bb6c-24e92c452817/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:537 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1709ef31-9ddd-42bf-9a95-4be4502a0828/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:560 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51/volumes/kubernetes.io~projected/kube-api-access-qkn7h DeviceMajor:0 DeviceMinor:902 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4d8cd7c5-31fd-4dca-b39b-6d62eb573707/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1120 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-333 DeviceMajor:0 DeviceMinor:333 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-371 DeviceMajor:0 DeviceMinor:371 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1089 DeviceMajor:0 DeviceMinor:1089 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ad27979ee67ec73db6166a66f6c8de5d02655b589472440fd2f397e6aebb3ab2/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-648 DeviceMajor:0 DeviceMinor:648 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b2b9e99c760ef8b2b1d3b355cd2d86c95a75c8f2455bc3e22e89b188ba7101e3/userdata/shm DeviceMajor:0 DeviceMinor:78 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e89e4070dae8204d097a2414e77e4c5c562c772569afe17fb4b2e8b090f82fda/userdata/shm DeviceMajor:0 DeviceMinor:784 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-187 DeviceMajor:0 DeviceMinor:187 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-447 DeviceMajor:0 DeviceMinor:447 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-965 DeviceMajor:0 DeviceMinor:965 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-205 DeviceMajor:0 DeviceMinor:205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3492cbd782b3ac55acb0d1ebd2aa664af10267490d59604deb78eb50aef952ff/userdata/shm DeviceMajor:0 DeviceMinor:566 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5c104245-d078-4856-9a60-207bb6efcfe8/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:883 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5b2746caab687d58b26002188b5ccba20de2a04cd6da171355541cf375046c0d/userdata/shm DeviceMajor:0 DeviceMinor:542 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7127b21b93cf0d636eeb4e29ca5a97fd29d095d44d5d5c9994999fa758bf4565/userdata/shm DeviceMajor:0 DeviceMinor:577 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-821 DeviceMajor:0 DeviceMinor:821 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5552d51223ad679691154e2dedf71641b800849a05e120dc501f6840be1e99e/userdata/shm DeviceMajor:0 DeviceMinor:810 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b37a738de54db612fada8bd81cf2bdbaea3d8eea466401eb8ad83715ec6bea2f/userdata/shm DeviceMajor:0 DeviceMinor:47 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:244 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-418 DeviceMajor:0 DeviceMinor:418 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~projected/kube-api-access-7mggv DeviceMajor:0 DeviceMinor:270 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-905 DeviceMajor:0 DeviceMinor:905 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1214 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b89fd2b72c95ae892c409ef90ceca60361969c1db213c09131f13705c3334986/userdata/shm DeviceMajor:0 DeviceMinor:390 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b246614c1f2f72db4cedbcce4b955bc3ac0b04e8bff7cc76cf229101226ee259/userdata/shm DeviceMajor:0 DeviceMinor:575 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-924 DeviceMajor:0 DeviceMinor:924 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-919 DeviceMajor:0 DeviceMinor:919 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-685 DeviceMajor:0 DeviceMinor:685 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1040 DeviceMajor:0 DeviceMinor:1040 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-437 DeviceMajor:0 DeviceMinor:437 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-865 DeviceMajor:0 DeviceMinor:865 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-922 DeviceMajor:0 DeviceMinor:922 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bccd3e3cca0e5a27f19803d019ffa435cc0a6a211a761789d34e9900fb9748dc/userdata/shm DeviceMajor:0 DeviceMinor:337 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-549 DeviceMajor:0 DeviceMinor:549 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0318746ff4f748b910f4c4078a258eb92f24f864ae719352a32329d892129cdb/userdata/shm DeviceMajor:0 DeviceMinor:1292 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-623 DeviceMajor:0 DeviceMinor:623 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e8c48a22-ed96-42c5-ac4a-dd7d4f204539/volumes/kubernetes.io~projected/kube-api-access-ksx6l DeviceMajor:0 DeviceMinor:377 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-670 DeviceMajor:0 DeviceMinor:670 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:422 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-592 DeviceMajor:0 DeviceMinor:592 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:163 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-832 DeviceMajor:0 DeviceMinor:832 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-455 DeviceMajor:0 DeviceMinor:455 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:260 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1179 DeviceMajor:0 DeviceMinor:1179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-745 DeviceMajor:0 DeviceMinor:745 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d462fc60c97084643070378d982a956e1f53a8cb223bde5d6b24565dab2fc818/userdata/shm DeviceMajor:0 DeviceMinor:1129 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1367 DeviceMajor:0 DeviceMinor:1367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9f9442b-25b9-420f-b748-bb13423809fe/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:480 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-547 DeviceMajor:0 DeviceMinor:547 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1030 DeviceMajor:0 DeviceMinor:1030 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:261 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1238 DeviceMajor:0 DeviceMinor:1238 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b9fe0660-fae4-4f97-8895-dbc4845cee40/volumes/kubernetes.io~projected/kube-api-access-7r85p DeviceMajor:0 DeviceMinor:1174 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3506d2533f5948044615b3daf194c86dee0685849b66763860811b20d32f418/userdata/shm DeviceMajor:0 DeviceMinor:1304 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ab951b1-6898-4357-b813-16365f3f89d5/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:888 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1044 DeviceMajor:0 DeviceMinor:1044 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2f9cd117-c84f-44c9-80a9-879a04d62934/volumes/kubernetes.io~projected/kube-api-access-m98rt DeviceMajor:0 DeviceMinor:1165 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1121 Capacity:49335549952 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-535 DeviceMajor:0 DeviceMinor:535 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/89383482-190e-4f74-a81e-b1547e5b9ae6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:697 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-996 DeviceMajor:0 DeviceMinor:996 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-532 DeviceMajor:0 DeviceMinor:532 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1298 DeviceMajor:0 DeviceMinor:1298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1054 DeviceMajor:0 DeviceMinor:1054 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-704 DeviceMajor:0 DeviceMinor:704 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-631 DeviceMajor:0 DeviceMinor:631 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1fb59696-1d5f-41bb-9211-b89c63b10840/volumes/kubernetes.io~projected/kube-api-access-8djgj DeviceMajor:0 DeviceMinor:356 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33f515505da92fce1875904be2b838a9fceeeb5773f300e97e9d391050d94811/userdata/shm DeviceMajor:0 DeviceMinor:373 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2be4e82eb96940a91f7ac36e8a59bd96b86a7b6fac8a7814b9cb48d762103f37/userdata/shm DeviceMajor:0 DeviceMinor:431 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6dfca740-0387-428a-b957-3e8a09c6e352/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:559 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-931 DeviceMajor:0 DeviceMinor:931 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~projected/kube-api-access-5j4cs DeviceMajor:0 DeviceMinor:141 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2a5fb83b35a727aa019fe00cd3fd649fdc6a109862d8c91e0031dff4209d98e3/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/827635bac05f32a4d1b33aabd85a52eb2d7b3922ab83e829cdc824722116be6c/userdata/shm DeviceMajor:0 DeviceMinor:286 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-940 DeviceMajor:0 DeviceMinor:940 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-439 DeviceMajor:0 DeviceMinor:439 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aaa9389e6efd83bcb84425795f77ecd0592b13d2955b3048aeff511ecb88fc48/userdata/shm DeviceMajor:0 DeviceMinor:671 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/17031191ab6d96a7b42b27f8e62cc7de662a0a1661bf978c7cf3315a18929da9/userdata/shm DeviceMajor:0 DeviceMinor:1128 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:1286 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~projected/kube-api-access-7vvm8 DeviceMajor:0 
DeviceMinor:295 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-635 DeviceMajor:0 DeviceMinor:635 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-654 DeviceMajor:0 DeviceMinor:654 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0d45f4e60b11e0b0a317456c0195f07cdb88a32c6fdc95b3ec005464743a5f86/userdata/shm DeviceMajor:0 DeviceMinor:426 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1691f192a8834aa22572ce2ad682bc87e326607190761ea473a2ecaf32c9e175/userdata/shm DeviceMajor:0 DeviceMinor:861 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bbdbadd9-eeaa-46ef-936e-5db8d395c118/volumes/kubernetes.io~projected/kube-api-access-ttmwx DeviceMajor:0 DeviceMinor:892 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-611 DeviceMajor:0 DeviceMinor:611 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-441 DeviceMajor:0 DeviceMinor:441 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7b0a0741b1c4a0dbf76177da995e7cc407702a375fbd2c1f79e4ec49f22b6e5f/userdata/shm DeviceMajor:0 DeviceMinor:544 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-846 DeviceMajor:0 DeviceMinor:846 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/54f65f910e458ec6e67c421fe2cab6c8d04efb4552cacded48383019268d4056/userdata/shm DeviceMajor:0 DeviceMinor:762 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-599 
DeviceMajor:0 DeviceMinor:599 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1df81fcc-f967-4874-ad16-1a89f0e7875a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ce2b6fde-de56-49c3-9bd6-e81c679b02bc/volumes/kubernetes.io~projected/kube-api-access-2k8n8 DeviceMajor:0 DeviceMinor:276 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/daeb204866928dea63cda5d95ee5bd6ef7be131f67e5efa51523f3185688b49e/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a92cb32c4be6840fe62cceeff083a250664f650a02bcc7c9c164c3636c13a84d/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-570 DeviceMajor:0 DeviceMinor:570 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae1fd116-6f63-4344-b7af-278665649e5a/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:897 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d72373aa995597c762385fce3b659d1483668a485ad494b6b7d7dd517099e857/userdata/shm DeviceMajor:0 DeviceMinor:380 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1050 DeviceMajor:0 DeviceMinor:1050 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a7e80ad99f32fd1031084b1ec720eccfe0c30d3f2999f46f1a0b9a07c12c03d3/userdata/shm DeviceMajor:0 DeviceMinor:581 Capacity:67108864 
Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-633 DeviceMajor:0 DeviceMinor:633 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-674 DeviceMajor:0 DeviceMinor:674 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-328 DeviceMajor:0 DeviceMinor:328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1133 DeviceMajor:0 DeviceMinor:1133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4925985880a2064a6380cae65dbb1eb737b503d2a9366dcfbcec286b6e942ef7/userdata/shm DeviceMajor:0 DeviceMinor:430 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c29fd426-7c89-434e-8332-1ca31075d4bf/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:805 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ae1fd116-6f63-4344-b7af-278665649e5a/volumes/kubernetes.io~projected/kube-api-access-wf682 DeviceMajor:0 DeviceMinor:898 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-402 DeviceMajor:0 DeviceMinor:402 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:245 Capacity:49335549952 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:overlay_0-459 DeviceMajor:0 DeviceMinor:459 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:555 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-795 DeviceMajor:0 DeviceMinor:795 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-836 DeviceMajor:0 DeviceMinor:836 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-650 DeviceMajor:0 DeviceMinor:650 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-690 DeviceMajor:0 DeviceMinor:690 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-746 DeviceMajor:0 DeviceMinor:746 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:262 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e0b28c90-d5b6-44f3-867c-020ece32ac7d/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:289 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-986 DeviceMajor:0 DeviceMinor:986 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ad1e0968f9a0f9395b52d4138ec76c893d5513164ae2900823432b7870c6a271/userdata/shm DeviceMajor:0 DeviceMinor:1135 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b9fe0660-fae4-4f97-8895-dbc4845cee40/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 
DeviceMinor:1170 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d45bdb88cf4fb87c1f9683f4dd82403ae62e23be61f87cc716489058be0075c3/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0aa23336820d4847f443dc2f86a2ade4113e5076452290a0fb2cf4f2ca4f4941/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~projected/kube-api-access-mp57v DeviceMajor:0 DeviceMinor:558 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bd609bd3-2525-4b88-8f07-94a0418fb582/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:885 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ae1fd116-6f63-4344-b7af-278665649e5a/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:896 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1123 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1028 DeviceMajor:0 DeviceMinor:1028 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-461 DeviceMajor:0 DeviceMinor:461 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-799 DeviceMajor:0 DeviceMinor:799 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-629 DeviceMajor:0 DeviceMinor:629 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~projected/kube-api-access-8nd7r DeviceMajor:0 DeviceMinor:251 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1168 DeviceMajor:0 DeviceMinor:1168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1192 DeviceMajor:0 DeviceMinor:1192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-412 DeviceMajor:0 DeviceMinor:412 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/47b2f781a814a8d1bcfc1cccd7e4c348407c92b6cdeff2bb7b600cfbaa766dff/userdata/shm DeviceMajor:0 DeviceMinor:580 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-875 DeviceMajor:0 DeviceMinor:875 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-445 DeviceMajor:0 DeviceMinor:445 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-375 DeviceMajor:0 DeviceMinor:375 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b2d70b13e56c93d2b547edf220b4dd7dcd419773ebed8ee5ba82b3212eb438a5/userdata/shm DeviceMajor:0 DeviceMinor:394 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:415 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1209 Capacity:49335549952 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~projected/kube-api-access-x2qdb DeviceMajor:0 DeviceMinor:1124 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/89ed6373-78f8-4d77-82b2-1ab055b5b862/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1202 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-823 DeviceMajor:0 DeviceMinor:823 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-488 DeviceMajor:0 DeviceMinor:488 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/318e8d0079ec56751e5bcf03b814977bae46333d7a42c62cfe81d3ed0047c4ac/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-855 DeviceMajor:0 DeviceMinor:855 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c29fd426-7c89-434e-8332-1ca31075d4bf/volumes/kubernetes.io~projected/kube-api-access-z7k2n DeviceMajor:0 DeviceMinor:877 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-927 DeviceMajor:0 DeviceMinor:927 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/23b2cdbe43b5f53ee3da0198ee8c38e0997aeb51d9b1fb66eb114d3637b2718c/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/kube-api-access-lqxhp DeviceMajor:0 DeviceMinor:264 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-497 DeviceMajor:0 DeviceMinor:497 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-65 DeviceMajor:0 DeviceMinor:65 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/839bf5b1-b242-4bbd-bc09-cf6abcf7f734/volumes/kubernetes.io~projected/kube-api-access-pvxsh DeviceMajor:0 DeviceMinor:250 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:754 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9c078827-3bdb-4509-aeb3-eb558df1f6e7/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1116 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/af18215b-e749-4565-bb6c-24e92c452817/volumes/kubernetes.io~projected/kube-api-access-7c9xz DeviceMajor:0 DeviceMinor:539 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1112 DeviceMajor:0 DeviceMinor:1112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1294 DeviceMajor:0 DeviceMinor:1294 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31969539-bfd1-466f-8697-f13cbbd957df/volumes/kubernetes.io~projected/kube-api-access-7ts6s DeviceMajor:0 DeviceMinor:139 
Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/daf25ef5-8247-4dbb-bdc1-55104b1015b7/volumes/kubernetes.io~projected/kube-api-access-78bqv DeviceMajor:0 DeviceMinor:893 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6479d88f-463f-48ed-846d-2747752a8abb/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1287 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1255 DeviceMajor:0 DeviceMinor:1255 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6479d88f-463f-48ed-846d-2747752a8abb/volumes/kubernetes.io~projected/kube-api-access-mfmdd DeviceMajor:0 DeviceMinor:1291 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-564 DeviceMajor:0 DeviceMinor:564 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3a76972be7f15da250f8e27177b299ce05a6278ca9f8bfe782f7866364a2323b/userdata/shm DeviceMajor:0 DeviceMinor:429 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-772 DeviceMajor:0 DeviceMinor:772 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/21e8e44b-b883-4afb-af90-d6c1265edf34/volumes/kubernetes.io~projected/kube-api-access-rk6hv DeviceMajor:0 DeviceMinor:699 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/478be5e4-cf17-4ebf-a45a-c18cd2b69929/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/89383482-190e-4f74-a81e-b1547e5b9ae6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:692 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-660 DeviceMajor:0 DeviceMinor:660 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1240 DeviceMajor:0 DeviceMinor:1240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:249 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~projected/kube-api-access-26x7b DeviceMajor:0 DeviceMinor:292 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-705 DeviceMajor:0 DeviceMinor:705 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f1388469-5e55-4c1b-97c3-c88777f29ae7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-392 DeviceMajor:0 DeviceMinor:392 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e668bf18622f735aba88fe56630f792fd4bf653bbe4e51d87240b3f22f8d64bd/userdata/shm DeviceMajor:0 DeviceMinor:714 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-443 DeviceMajor:0 DeviceMinor:443 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:755 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/0ede86c860ac980d49efbb5f04d472fabe03c4653074a1a827ff49d2034894a1/userdata/shm DeviceMajor:0 DeviceMinor:716 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:1281 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-457 DeviceMajor:0 DeviceMinor:457 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07281644-2789-424f-8429-aa4448dda01e/volumes/kubernetes.io~projected/kube-api-access-l5pw4 DeviceMajor:0 DeviceMinor:123 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c3aa45a-44cc-48fb-a478-ce01a70c4b02/volumes/kubernetes.io~projected/kube-api-access-2zkbq DeviceMajor:0 DeviceMinor:288 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1288 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1324 DeviceMajor:0 DeviceMinor:1324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2f9cd117-c84f-44c9-80a9-879a04d62934/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1164 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/042d8457-04dc-4171-8b0f-f9e3de695c46/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1201 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e9b84bcaac977feb96e17841e41bc90c7743f1709d32f1ab4ffe9c651b7c5436/userdata/shm DeviceMajor:0 
DeviceMinor:1227 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:420 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/afa174b3-912c-4b56-b5eb-f3e3df012c11/volumes/kubernetes.io~projected/kube-api-access-2795m DeviceMajor:0 DeviceMinor:545 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-853 DeviceMajor:0 DeviceMinor:853 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e597c41c82bb3cdfce7c1bbc08b1c76dcd4cf2cd3b4feeb956d08f44152b7ef9/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22bba1b3-587d-4802-b4ae-946827c3fa7a/volumes/kubernetes.io~projected/kube-api-access-2wnh5 DeviceMajor:0 DeviceMinor:291 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1296 DeviceMajor:0 DeviceMinor:1296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/volumes/kubernetes.io~projected/kube-api-access-2dx69 DeviceMajor:0 DeviceMinor:1194 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1219 DeviceMajor:0 DeviceMinor:1219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-871 DeviceMajor:0 DeviceMinor:871 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-410 DeviceMajor:0 DeviceMinor:410 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1090f75c486a323ae59e4678633d1d8f31d3b0da933bc500d26a674a58096eb0/userdata/shm DeviceMajor:0 DeviceMinor:97 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/volumes/kubernetes.io~projected/kube-api-access-rcnmk DeviceMajor:0 DeviceMinor:290 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/98226a59-5234-48f3-a9cd-21de305810dc/volumes/kubernetes.io~projected/kube-api-access-j2hwr DeviceMajor:0 DeviceMinor:698 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~projected/kube-api-access-kfzqt DeviceMajor:0 DeviceMinor:1266 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~projected/kube-api-access-qqzpj DeviceMajor:0 DeviceMinor:1290 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/db2a7cb1-1d05-4b24-86ed-f823fad5013e/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:257 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-700 DeviceMajor:0 DeviceMinor:700 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-848 DeviceMajor:0 DeviceMinor:848 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-817 DeviceMajor:0 DeviceMinor:817 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-729 DeviceMajor:0 DeviceMinor:729 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-475 DeviceMajor:0 DeviceMinor:475 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}
{Device:/var/lib/kubelet/pods/89ed6373-78f8-4d77-82b2-1ab055b5b862/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1199 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-494 DeviceMajor:0 DeviceMinor:494 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-596 DeviceMajor:0 DeviceMinor:596 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5360f3f5-2d07-432f-af45-22659538c55e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:259 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34382460-b2d7-4154-87ba-c0347a4c0f1b/volumes/kubernetes.io~projected/kube-api-access-5dx9s DeviceMajor:0 DeviceMinor:850 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-758 DeviceMajor:0 DeviceMinor:758 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1316 DeviceMajor:0 DeviceMinor:1316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65f0cac0248f829995872c710eae2661c9c322f7d317b3a4dc6cd36bbbee0b47/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fca78741-ca32-4867-b44f-483fd62f2942/volumes/kubernetes.io~projected/kube-api-access-2cnvt DeviceMajor:0 DeviceMinor:1126 Capacity:49335549952 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/c16bea21819b6e1d15437de870badf6de1fc66d12913185c9cecf80f61f24b54/userdata/shm DeviceMajor:0 DeviceMinor:80 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-625 DeviceMajor:0 DeviceMinor:625 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/21e8e44b-b883-4afb-af90-d6c1265edf34/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:666 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1144 DeviceMajor:0 DeviceMinor:1144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-579 DeviceMajor:0 DeviceMinor:579 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-129 DeviceMajor:0 DeviceMinor:129 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/836a6d7e-9b26-425f-ae21-00422515d7fe/volumes/kubernetes.io~projected/kube-api-access-ms8wk DeviceMajor:0 DeviceMinor:164 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/62fc400b-b3dd-4134-bd27-69dd8369153a/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:899 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1283 DeviceMajor:0 DeviceMinor:1283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:378 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1122 DeviceMajor:0 DeviceMinor:1122 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb0b310e4353078b29e20eeb338d9c0abab57242511e7e72ade79783d9a85447/userdata/shm DeviceMajor:0 DeviceMinor:1207 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-470 DeviceMajor:0 DeviceMinor:470 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-677 DeviceMajor:0 DeviceMinor:677 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bbdbadd9-eeaa-46ef-936e-5db8d395c118/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:891 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1206e415177b826b05bc4efd16176f68cc29c42141e8fa6d0d360426d4f33a85/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-825 DeviceMajor:0 DeviceMinor:825 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-731 DeviceMajor:0 DeviceMinor:731 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-105 DeviceMajor:0 DeviceMinor:105 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1181 DeviceMajor:0 DeviceMinor:1181 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/042d8457-04dc-4171-8b0f-f9e3de695c46/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1200 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1074 DeviceMajor:0 DeviceMinor:1074 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1149 DeviceMajor:0 DeviceMinor:1149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 
DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe/volumes/kubernetes.io~projected/kube-api-access-rxr6j DeviceMajor:0 DeviceMinor:385 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1077 DeviceMajor:0 DeviceMinor:1077 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f7330d7b1c8d5e165b36ef69bd54c2550cc5df6e53223d95c4871726c4c1402/userdata/shm DeviceMajor:0 DeviceMinor:282 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-606 DeviceMajor:0 DeviceMinor:606 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e112dc6a9d5f726f666b1385197c77d837257cbee8251d26060f19151f5ada2f/userdata/shm DeviceMajor:0 DeviceMinor:858 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-658 DeviceMajor:0 DeviceMinor:658 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-627 DeviceMajor:0 DeviceMinor:627 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-728 DeviceMajor:0 DeviceMinor:728 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-767 DeviceMajor:0 DeviceMinor:767 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/19cf75ed-6a4e-444d-8975-fa6ecba79f13/volumes/kubernetes.io~projected/kube-api-access-7hxz5 DeviceMajor:0 DeviceMinor:843 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-738 DeviceMajor:0 DeviceMinor:738 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-530 DeviceMajor:0 DeviceMinor:530 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dbce6cdc-040a-48e1-8a81-b6ff9c180eba/volumes/kubernetes.io~projected/kube-api-access-z2kct DeviceMajor:0 DeviceMinor:255 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4d060bff-3c25-4eeb-bdd3-e20fb2687645/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:561 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1177 DeviceMajor:0 DeviceMinor:1177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5619c16f90d5aa3883b6b245c376b14384e785794b212891be3d9cc98f2155b/userdata/shm DeviceMajor:0 DeviceMinor:304 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1032 DeviceMajor:0 DeviceMinor:1032 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1150 DeviceMajor:0 DeviceMinor:1150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-794 DeviceMajor:0 DeviceMinor:794 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-51 DeviceMajor:0 DeviceMinor:51 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-984 DeviceMajor:0 DeviceMinor:984 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-680 DeviceMajor:0 DeviceMinor:680 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-1141 DeviceMajor:0 DeviceMinor:1141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/826db63109cf25d66ed31a255738b519d4a9faae58f44b83818b33fc45665543/userdata/shm DeviceMajor:0 DeviceMinor:578 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/98226a59-5234-48f3-a9cd-21de305810dc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:656 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-770 DeviceMajor:0 DeviceMinor:770 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/62fc400b-b3dd-4134-bd27-69dd8369153a/volumes/kubernetes.io~projected/kube-api-access-zbsxw DeviceMajor:0 DeviceMinor:900 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-912 DeviceMajor:0 DeviceMinor:912 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:657 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/db83379789ef98a1c8bd3954093bb31968ab0139d9f5bc569d532d29a9e92213/userdata/shm DeviceMajor:0 DeviceMinor:90 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1215 DeviceMajor:0 DeviceMinor:1215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1264 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/638616f7252126f59d4bcab9c5e05a063b0ebaded1038582ec0b4152c67c3d10/userdata/shm DeviceMajor:0 DeviceMinor:110 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6dfca740-0387-428a-b957-3e8a09c6e352/volumes/kubernetes.io~projected/kube-api-access-d4457 DeviceMajor:0 DeviceMinor:279 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b9eb45bd-fc01-4707-87ea-64f07f72f6f9/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:517 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-499 DeviceMajor:0 DeviceMinor:499 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-511 DeviceMajor:0 DeviceMinor:511 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1309 DeviceMajor:0 DeviceMinor:1309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-636 DeviceMajor:0 DeviceMinor:636 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1709ef31-9ddd-42bf-9a95-4be4502a0828/volumes/kubernetes.io~projected/kube-api-access-79j9f DeviceMajor:0 DeviceMinor:135 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ac8b90837a8f5e731e7b22ff050f1b380571286ef85231576020efe34cd2e430/userdata/shm DeviceMajor:0 DeviceMinor:568 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1172 DeviceMajor:0 DeviceMinor:1172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-468 DeviceMajor:0 DeviceMinor:468 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e6e96cc43446a3135181efd422d5641ca4e6cd2f71bf5238bf91b4954d41a24a/userdata/shm DeviceMajor:0 DeviceMinor:779 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/327d8b93a0b8136db5fa70fbc964d1cbd5cf33fa512a27f0f0cf22df8db25f21/userdata/shm DeviceMajor:0 DeviceMinor:195 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/441065bea23c74396afef0b5e83785e19b00c76012695c20dcc42243f3f809f3/userdata/shm DeviceMajor:0 DeviceMinor:452 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7635c0ff-4d40-4310-8187-230323e504e0/volumes/kubernetes.io~projected/kube-api-access-p5m78 DeviceMajor:0 DeviceMinor:484 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-638 DeviceMajor:0 DeviceMinor:638 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/volumes/kubernetes.io~projected/kube-api-access-s4j88 DeviceMajor:0 DeviceMinor:538 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/daf25ef5-8247-4dbb-bdc1-55104b1015b7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:890 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9a135ef2bf0cea92e0e6d6c962da99bd4bf9e44e47304e0bce9ab97fa97ad55c/userdata/shm DeviceMajor:0 DeviceMinor:194 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9cc7b181ab55ab6abb3242c925ed6067592af711ebb394b812dbd9cfe003dfbd/userdata/shm DeviceMajor:0 DeviceMinor:85 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~projected/kube-api-access-8k2dv DeviceMajor:0 DeviceMinor:246 
Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~projected/kube-api-access-bpk24 DeviceMajor:0 DeviceMinor:266 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:551 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1d3a36bb-9d11-48b3-a3b5-07b47738ef97/volumes/kubernetes.io~projected/kube-api-access-lvjcp DeviceMajor:0 DeviceMinor:252 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/89ed6373-78f8-4d77-82b2-1ab055b5b862/volumes/kubernetes.io~projected/kube-api-access-f64ql DeviceMajor:0 DeviceMinor:1204 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39ccf158-b40f-4dba-90e2-27b1409487b7/volumes/kubernetes.io~projected/kube-api-access-4zmwm DeviceMajor:0 DeviceMinor:332 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-797 DeviceMajor:0 DeviceMinor:797 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-764 DeviceMajor:0 DeviceMinor:764 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1269 DeviceMajor:0 DeviceMinor:1269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-192 DeviceMajor:0 DeviceMinor:192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b9eb45bd-fc01-4707-87ea-64f07f72f6f9/volumes/kubernetes.io~projected/kube-api-access-qxm8p DeviceMajor:0 DeviceMinor:520 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9/volumes/kubernetes.io~projected/kube-api-access-8p4w6 DeviceMajor:0 DeviceMinor:256 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-521 DeviceMajor:0 DeviceMinor:521 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-811 DeviceMajor:0 DeviceMinor:811 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-466 DeviceMajor:0 DeviceMinor:466 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:901 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-766 DeviceMajor:0 DeviceMinor:766 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1232 DeviceMajor:0 DeviceMinor:1232 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-778 DeviceMajor:0 DeviceMinor:778 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7635c0ff-4d40-4310-8187-230323e504e0/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:94 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b77cf717eaf94cf8bf6837636ba7313b88c41d8f394ba5e1308558d0bca1c808/userdata/shm DeviceMajor:0 DeviceMinor:1130 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-998 DeviceMajor:0 DeviceMinor:998 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-324 DeviceMajor:0 DeviceMinor:324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/78ca76bb28058c596e989b94f315e85b6607b7b0e487f9746f2eff407fceb169/userdata/shm DeviceMajor:0 DeviceMinor:300 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bd609bd3-2525-4b88-8f07-94a0418fb582/volumes/kubernetes.io~projected/kube-api-access-zztmz DeviceMajor:0 DeviceMinor:887 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/529a8813da1db26a89da6c06d3a8fcc3afc05b6c872a6a5a2b9fb3ceb4df9687/userdata/shm DeviceMajor:0 DeviceMinor:193 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/312ca024-c8f0-4994-8f9a-b707607341fe/volumes/kubernetes.io~projected/kube-api-access-bpnmz DeviceMajor:0 DeviceMinor:107 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/59c1cc61-8692-4a35-83fc-6bbef7086117/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:556 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d0a141b311d0fcd6bd712d0075c6fb1c7f72a45707678fd94f7971d15d34a88f/userdata/shm DeviceMajor:0 DeviceMinor:1166 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/volumes/kubernetes.io~projected/kube-api-access-sc9wx DeviceMajor:0 DeviceMinor:675 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-688 DeviceMajor:0 DeviceMinor:688 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1271 DeviceMajor:0 DeviceMinor:1271 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39790258-73bc-4c37-a935-e8d3c2a2d5c6/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:783 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-641 DeviceMajor:0 DeviceMinor:641 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-343 DeviceMajor:0 DeviceMinor:343 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dbce6cdc-040a-48e1-8a81-b6ff9c180eba/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:557 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ef18ace4-7316-4600-9be9-2adc792705e9/volumes/kubernetes.io~projected/kube-api-access-kn7cs DeviceMajor:0 DeviceMinor:882 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1210 DeviceMajor:0 DeviceMinor:1210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-774 DeviceMajor:0 DeviceMinor:774 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1234 DeviceMajor:0 DeviceMinor:1234 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-473 DeviceMajor:0 DeviceMinor:473 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9f9442b-25b9-420f-b748-bb13423809fe/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:479 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/eb135cff-1a2e-468d-80ab-f7db3f57552a/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:565 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8341254b8ef7faec187b8fe415e34b54bbc9e2b3da20b0d37f8005ee126bc089/userdata/shm DeviceMajor:0 DeviceMinor:381 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-597 DeviceMajor:0 DeviceMinor:597 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cbd2814207ea73c81ee03ec39936289eb40513d40ec1dfdddcdf33cff0834b18/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-449 DeviceMajor:0 DeviceMinor:449 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-615 DeviceMajor:0 DeviceMinor:615 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/11aaad8c-2f25-460f-b4af-f27d8bc682a0/volumes/kubernetes.io~projected/kube-api-access-x5z86 DeviceMajor:0 DeviceMinor:860 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-957 DeviceMajor:0 DeviceMinor:957 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1139 DeviceMajor:0 DeviceMinor:1139 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:421 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/22bba1b3-587d-4802-b4ae-946827c3fa7a/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:562 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1236 DeviceMajor:0 DeviceMinor:1236 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aae1df07-cf9f-47a3-b146-2a0adb182660/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:1289 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a01adc1f41522dbb8a1d23da740cfd44f6a53e272a46c5d7003ab771e7ccdcb/userdata/shm DeviceMajor:0 DeviceMinor:60 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/01e90033-9ddf-41b4-ab61-e89add6c2fde/volumes/kubernetes.io~projected/kube-api-access-j2tk7 DeviceMajor:0 DeviceMinor:258 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ef18ace4-7316-4600-9be9-2adc792705e9/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:881 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e8c48a22-ed96-42c5-ac4a-dd7d4f204539/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:376 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-979 DeviceMajor:0 DeviceMinor:979 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fca213c3-42ca-4341-a2e6-a143b9389f9e/volumes/kubernetes.io~projected/kube-api-access-7krn8 DeviceMajor:0 DeviceMinor:727 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b9fe0660-fae4-4f97-8895-dbc4845cee40/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1171 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-941 DeviceMajor:0 DeviceMinor:941 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/36eb1911b1d84465d4f3614b052501f0ab8200fc09c3cd58c9e93b58066e3180/userdata/shm DeviceMajor:0 DeviceMinor:574 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-73 DeviceMajor:0 DeviceMinor:73 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/042d8457-04dc-4171-8b0f-f9e3de695c46/volumes/kubernetes.io~projected/kube-api-access-hpz9d DeviceMajor:0 DeviceMinor:1203 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-916 DeviceMajor:0 DeviceMinor:916 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-145 DeviceMajor:0 DeviceMinor:145 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f98aeaf7-bf1a-46af-bf1b-85713baa4c67/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:263 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8a97bbf5-7409-4f36-894b-b88284e1b6d0/volumes/kubernetes.io~projected/kube-api-access-vq4ct DeviceMajor:0 DeviceMinor:391 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-711 DeviceMajor:0 DeviceMinor:711 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-326 DeviceMajor:0 DeviceMinor:326 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9f9442b-25b9-420f-b748-bb13423809fe/volumes/kubernetes.io~projected/kube-api-access-kxs4n 
DeviceMajor:0 DeviceMinor:481 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-816 DeviceMajor:0 DeviceMinor:816 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-703 DeviceMajor:0 DeviceMinor:703 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/533fe3c7-504f-40aa-aab0-8d66ef27920f/volumes/kubernetes.io~projected/kube-api-access-jrwcs DeviceMajor:0 DeviceMinor:108 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/eb135cff-1a2e-468d-80ab-f7db3f57552a/volumes/kubernetes.io~projected/kube-api-access-tk5sc DeviceMajor:0 DeviceMinor:254 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8ab951b1-6898-4357-b813-16365f3f89d5/volumes/kubernetes.io~projected/kube-api-access-xdzzt DeviceMajor:0 DeviceMinor:889 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7bc9a872390b5d9f7e6deaa6fe763c395d3fe8f5593fe4a35eea402b1c688808/userdata/shm DeviceMajor:0 DeviceMinor:851 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/166c259337ecc4f073ab3f6650460578e7d7cb947fe167df547591f1f002809b/userdata/shm DeviceMajor:0 DeviceMinor:1267 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d65a0af4-c96f-44f8-9384-6bae4585983b/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:273 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-341 DeviceMajor:0 DeviceMinor:341 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1260 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-486 DeviceMajor:0 DeviceMinor:486 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-523 DeviceMajor:0 DeviceMinor:523 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1038 DeviceMajor:0 DeviceMinor:1038 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-873 DeviceMajor:0 DeviceMinor:873 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d5397ea31b615de3ed7b896751d9092d0868b0406feb889378a7e38993fb96df/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b9eb45bd-fc01-4707-87ea-64f07f72f6f9/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:518 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-863 DeviceMajor:0 DeviceMinor:863 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-181 DeviceMajor:0 DeviceMinor:181 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-669 DeviceMajor:0 DeviceMinor:669 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b48391bd94beb64b336f15ad176f98f36973e5e545db832340669d5eac56bf63/userdata/shm DeviceMajor:0 DeviceMinor:127 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1318 DeviceMajor:0 DeviceMinor:1318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-937 DeviceMajor:0 DeviceMinor:937 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-477 DeviceMajor:0 DeviceMinor:477 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84bc6873f1c2f152a188b93adf9b13caf01f769508b7055c0e1ef90ebe5496e8/userdata/shm DeviceMajor:0 DeviceMinor:844 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1217 DeviceMajor:0 DeviceMinor:1217 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-775 DeviceMajor:0 DeviceMinor:775 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-572 DeviceMajor:0 DeviceMinor:572 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/507676e6f82ab903ac83daafdb4ad3f73a28bb521382cf0074ea56ae587cb87f/userdata/shm DeviceMajor:0 DeviceMinor:759 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0318746ff4f748b MacAddress:c2:18:74:dd:63:60 Speed:10000 Mtu:8900} {Name:0aa23336820d484 
MacAddress:0e:2e:32:bf:46:9e Speed:10000 Mtu:8900} {Name:0d45f4e60b11e0b MacAddress:16:1a:ea:11:56:5e Speed:10000 Mtu:8900} {Name:0ede86c860ac980 MacAddress:6a:1c:ab:61:a2:f8 Speed:10000 Mtu:8900} {Name:166c259337ecc4f MacAddress:32:cc:e5:c7:20:40 Speed:10000 Mtu:8900} {Name:1691f192a8834aa MacAddress:92:e2:ca:98:ca:f0 Speed:10000 Mtu:8900} {Name:17031191ab6d96a MacAddress:0e:1c:f0:c7:e8:1b Speed:10000 Mtu:8900} {Name:2a5fb83b35a727a MacAddress:1a:8a:cc:70:8f:ae Speed:10000 Mtu:8900} {Name:2be4e82eb96940a MacAddress:72:b6:1e:61:36:74 Speed:10000 Mtu:8900} {Name:318e8d0079ec567 MacAddress:12:6b:9c:a4:69:40 Speed:10000 Mtu:8900} {Name:327d8b93a0b8136 MacAddress:3a:07:ed:d8:2d:5f Speed:10000 Mtu:8900} {Name:3372bbf7f4c3060 MacAddress:32:3a:8d:12:1c:cd Speed:10000 Mtu:8900} {Name:33f515505da92fc MacAddress:7a:e7:a7:66:c8:e2 Speed:10000 Mtu:8900} {Name:3492cbd782b3ac5 MacAddress:26:92:aa:61:c0:90 Speed:10000 Mtu:8900} {Name:36eb1911b1d8446 MacAddress:a6:ad:ae:9d:86:57 Speed:10000 Mtu:8900} {Name:39a1a5d33692c60 MacAddress:c6:84:9d:94:49:e3 Speed:10000 Mtu:8900} {Name:3a76972be7f15da MacAddress:26:7e:b1:48:7e:cb Speed:10000 Mtu:8900} {Name:441065bea23c743 MacAddress:2a:e7:fc:06:ca:21 Speed:10000 Mtu:8900} {Name:47b2f781a814a8d MacAddress:9e:ad:04:9d:cd:75 Speed:10000 Mtu:8900} {Name:4925985880a2064 MacAddress:92:4a:5b:3f:cb:cc Speed:10000 Mtu:8900} {Name:507676e6f82ab90 MacAddress:32:1d:38:60:85:af Speed:10000 Mtu:8900} {Name:529a8813da1db26 MacAddress:d2:69:a8:92:78:9b Speed:10000 Mtu:8900} {Name:54f65f910e458ec MacAddress:f2:8b:30:13:9e:e7 Speed:10000 Mtu:8900} {Name:5b2746caab687d5 MacAddress:2e:ac:e5:b2:b1:f4 Speed:10000 Mtu:8900} {Name:6a7bfbfdcb05372 MacAddress:8e:0f:3c:c5:20:91 Speed:10000 Mtu:8900} {Name:7127b21b93cf0d6 MacAddress:ca:09:15:f8:01:33 Speed:10000 Mtu:8900} {Name:78ca76bb28058c5 MacAddress:ce:d9:31:51:3c:1b Speed:10000 Mtu:8900} {Name:7bc9a872390b5d9 MacAddress:ca:20:76:1d:b3:37 Speed:10000 Mtu:8900} {Name:826db63109cf25d MacAddress:d6:27:92:33:4d:76 
Speed:10000 Mtu:8900} {Name:827635bac05f32a MacAddress:12:24:48:6c:f2:ef Speed:10000 Mtu:8900} {Name:8341254b8ef7fae MacAddress:5a:68:aa:b7:3b:fc Speed:10000 Mtu:8900} {Name:84bc6873f1c2f15 MacAddress:76:62:9e:8b:8c:69 Speed:10000 Mtu:8900} {Name:8cf490279cd50e8 MacAddress:96:44:f9:51:51:cd Speed:10000 Mtu:8900} {Name:8f7330d7b1c8d5e MacAddress:32:92:48:65:cc:20 Speed:10000 Mtu:8900} {Name:9a135ef2bf0cea9 MacAddress:86:f3:12:b2:ad:ce Speed:10000 Mtu:8900} {Name:a7e80ad99f32fd1 MacAddress:42:0e:37:39:15:40 Speed:10000 Mtu:8900} {Name:a92cb32c4be6840 MacAddress:0a:b1:e9:cd:f5:1e Speed:10000 Mtu:8900} {Name:aaa9389e6efd83b MacAddress:56:8a:8a:81:f8:47 Speed:10000 Mtu:8900} {Name:ad1e0968f9a0f93 MacAddress:f2:31:42:3b:01:eb Speed:10000 Mtu:8900} {Name:b246614c1f2f72d MacAddress:6a:e5:a1:cc:f1:55 Speed:10000 Mtu:8900} {Name:b2d70b13e56c93d MacAddress:0e:6e:9a:ec:df:0c Speed:10000 Mtu:8900} {Name:b77cf717eaf94cf MacAddress:de:6d:4b:90:76:01 Speed:10000 Mtu:8900} {Name:b89fd2b72c95ae8 MacAddress:42:4c:71:a5:4a:25 Speed:10000 Mtu:8900} {Name:bccd3e3cca0e5a2 MacAddress:1a:d0:24:98:d9:12 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:22:87:73:30:49:0a Speed:0 Mtu:8900} {Name:c5552d51223ad67 MacAddress:ea:02:d4:2a:00:5a Speed:10000 Mtu:8900} {Name:c5619c16f90d5aa MacAddress:fa:63:bc:1c:3f:70 Speed:10000 Mtu:8900} {Name:cbd2814207ea73c MacAddress:a6:dc:69:93:aa:10 Speed:10000 Mtu:8900} {Name:d3506d2533f5948 MacAddress:4e:d2:0b:c1:77:57 Speed:10000 Mtu:8900} {Name:d45bdb88cf4fb87 MacAddress:5e:80:24:a5:22:ad Speed:10000 Mtu:8900} {Name:d5397ea31b615de MacAddress:9a:30:1e:05:4a:1f Speed:10000 Mtu:8900} {Name:d72373aa995597c MacAddress:12:af:45:ed:57:64 Speed:10000 Mtu:8900} {Name:dd42f3b0e8e73a1 MacAddress:46:c2:ec:d4:69:61 Speed:10000 Mtu:8900} {Name:e112dc6a9d5f726 MacAddress:8e:78:e5:4a:d7:99 Speed:10000 Mtu:8900} {Name:e597c41c82bb3cd MacAddress:8a:36:dc:c7:98:23 Speed:10000 Mtu:8900} {Name:e668bf18622f735 
MacAddress:7e:45:b8:52:1c:9a Speed:10000 Mtu:8900} {Name:e89e4070dae8204 MacAddress:62:4c:dc:38:e9:7d Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:8e:d0:9c Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:ad:cf:59 Speed:-1 Mtu:9000} {Name:fb0b310e4353078 MacAddress:5a:5e:c4:9e:84:49 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:7e:cc:dd:86:6c:ff Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified 
Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 
Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.419708 31420 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.419785 31420 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.419994 31420 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420153 31420 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420176 31420 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420422 31420 topology_manager.go:138] "Creating topology manager with none policy" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420430 31420 container_manager_linux.go:303] "Creating device plugin manager" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420438 31420 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420461 31420 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420501 31420 state_mem.go:36] "Initialized new in-memory state store" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420600 31420 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420648 31420 kubelet.go:418] "Attempting to sync node with API server" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420663 31420 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420677 31420 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420688 31420 kubelet.go:324] "Adding apiserver pod source" Feb 20 12:04:55.421223 master-0 kubenswrapper[31420]: I0220 12:04:55.420702 31420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 20 12:04:55.428590 master-0 kubenswrapper[31420]: I0220 12:04:55.428554 31420 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 20 12:04:55.428740 master-0 kubenswrapper[31420]: I0220 12:04:55.428722 31420 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 20 12:04:55.429048 master-0 kubenswrapper[31420]: I0220 12:04:55.429016 31420 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 20 12:04:55.429170 master-0 kubenswrapper[31420]: I0220 12:04:55.429155 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 20 12:04:55.429208 master-0 kubenswrapper[31420]: I0220 12:04:55.429180 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 20 12:04:55.429208 master-0 kubenswrapper[31420]: I0220 12:04:55.429190 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 20 12:04:55.429208 master-0 kubenswrapper[31420]: I0220 12:04:55.429200 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 20 12:04:55.429292 master-0 kubenswrapper[31420]: I0220 12:04:55.429209 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 20 12:04:55.429292 master-0 kubenswrapper[31420]: I0220 12:04:55.429217 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 20 12:04:55.429292 master-0 kubenswrapper[31420]: I0220 12:04:55.429227 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 20 12:04:55.429292 master-0 kubenswrapper[31420]: I0220 12:04:55.429236 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 20 12:04:55.429292 master-0 kubenswrapper[31420]: I0220 12:04:55.429246 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 20 12:04:55.429292 master-0 kubenswrapper[31420]: I0220 12:04:55.429255 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 20 12:04:55.429292 master-0 kubenswrapper[31420]: I0220 12:04:55.429268 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 20 12:04:55.429292 master-0 kubenswrapper[31420]: I0220 12:04:55.429291 31420 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 20 12:04:55.429499 master-0 kubenswrapper[31420]: I0220 12:04:55.429328 31420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 20 12:04:55.429752 master-0 kubenswrapper[31420]: I0220 12:04:55.429733 31420 server.go:1280] "Started kubelet" Feb 20 12:04:55.433613 master-0 kubenswrapper[31420]: I0220 12:04:55.430517 31420 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 20 12:04:55.433613 master-0 kubenswrapper[31420]: I0220 12:04:55.433202 31420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 20 12:04:55.433613 master-0 kubenswrapper[31420]: I0220 12:04:55.433280 31420 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 20 12:04:55.433613 master-0 kubenswrapper[31420]: I0220 12:04:55.433336 31420 server.go:449] "Adding debug handlers to kubelet server" Feb 20 12:04:55.430625 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 20 12:04:55.434311 master-0 kubenswrapper[31420]: I0220 12:04:55.433722 31420 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 20 12:04:55.446833 master-0 kubenswrapper[31420]: I0220 12:04:55.446108 31420 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.447169 31420 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.451594 31420 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.451636 31420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.451878 31420 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-21 11:39:43 +0000 UTC, rotation deadline is 2026-02-21 06:29:54.247064335 +0000 UTC Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.452077 31420 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h24m58.794994394s for next certificate rotation Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.452219 31420 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.452233 31420 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.452514 31420 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: E0220 12:04:55.459176 31420 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.460195 31420 factory.go:55] Registering systemd factory Feb 20 12:04:55.460303 master-0 kubenswrapper[31420]: I0220 12:04:55.460223 31420 factory.go:221] Registration of the systemd container factory successfully Feb 20 12:04:55.470567 master-0 kubenswrapper[31420]: I0220 12:04:55.462808 31420 factory.go:153] Registering CRI-O factory Feb 20 12:04:55.470567 master-0 kubenswrapper[31420]: I0220 12:04:55.462831 31420 factory.go:221] Registration of the crio container factory successfully Feb 20 12:04:55.470567 master-0 kubenswrapper[31420]: I0220 12:04:55.462963 31420 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 20 12:04:55.470567 master-0 kubenswrapper[31420]: I0220 12:04:55.462973 31420 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 20 12:04:55.470567 master-0 kubenswrapper[31420]: I0220 12:04:55.462984 31420 factory.go:103] Registering Raw factory Feb 20 12:04:55.470567 master-0 kubenswrapper[31420]: I0220 12:04:55.463345 31420 manager.go:1196] Started watching for new ooms in manager Feb 20 12:04:55.470567 master-0 kubenswrapper[31420]: I0220 12:04:55.463840 31420 manager.go:319] Starting recovery of all containers Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476275 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db2a7cb1-1d05-4b24-86ed-f823fad5013e" volumeName="kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-kube-api-access-6td56" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476378 31420 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fca78741-ca32-4867-b44f-483fd62f2942" volumeName="kubernetes.io/projected/fca78741-ca32-4867-b44f-483fd62f2942-kube-api-access-2cnvt" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476404 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9f9442b-25b9-420f-b748-bb13423809fe" volumeName="kubernetes.io/empty-dir/d9f9442b-25b9-420f-b748-bb13423809fe-cache" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476417 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="042d8457-04dc-4171-8b0f-f9e3de695c46" volumeName="kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-metrics-client-ca" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476439 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc400b-b3dd-4134-bd27-69dd8369153a" volumeName="kubernetes.io/projected/62fc400b-b3dd-4134-bd27-69dd8369153a-kube-api-access-zbsxw" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476452 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39790258-73bc-4c37-a935-e8d3c2a2d5c6" volumeName="kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476464 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" volumeName="kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476478 
31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89383482-190e-4f74-a81e-b1547e5b9ae6" volumeName="kubernetes.io/projected/89383482-190e-4f74-a81e-b1547e5b9ae6-kube-api-access" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476500 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aae1df07-cf9f-47a3-b146-2a0adb182660" volumeName="kubernetes.io/projected/aae1df07-cf9f-47a3-b146-2a0adb182660-kube-api-access-qqzpj" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476516 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f9cd117-c84f-44c9-80a9-879a04d62934" volumeName="kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-certs" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476545 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6479d88f-463f-48ed-846d-2747752a8abb" volumeName="kubernetes.io/projected/6479d88f-463f-48ed-846d-2747752a8abb-kube-api-access-mfmdd" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476596 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd609bd3-2525-4b88-8f07-94a0418fb582" volumeName="kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476609 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" volumeName="kubernetes.io/empty-dir/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-operand-assets" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476687 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9f9442b-25b9-420f-b748-bb13423809fe" volumeName="kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-ca-certs" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476702 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21e8e44b-b883-4afb-af90-d6c1265edf34" volumeName="kubernetes.io/secret/21e8e44b-b883-4afb-af90-d6c1265edf34-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476719 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="836a6d7e-9b26-425f-ae21-00422515d7fe" volumeName="kubernetes.io/secret/836a6d7e-9b26-425f-ae21-00422515d7fe-webhook-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476792 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89ed6373-78f8-4d77-82b2-1ab055b5b862" volumeName="kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476807 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aae1df07-cf9f-47a3-b146-2a0adb182660" volumeName="kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476840 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fca213c3-42ca-4341-a2e6-a143b9389f9e" volumeName="kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-serving-ca" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476872 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1df81fcc-f967-4874-ad16-1a89f0e7875a" volumeName="kubernetes.io/configmap/1df81fcc-f967-4874-ad16-1a89f0e7875a-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476909 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8" volumeName="kubernetes.io/projected/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-kube-api-access-rcnmk" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476937 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef18ace4-7316-4600-9be9-2adc792705e9" volumeName="kubernetes.io/projected/ef18ace4-7316-4600-9be9-2adc792705e9-kube-api-access-kn7cs" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476954 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fca213c3-42ca-4341-a2e6-a143b9389f9e" volumeName="kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-policies" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476969 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="042d8457-04dc-4171-8b0f-f9e3de695c46" volumeName="kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.476989 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9fe0660-fae4-4f97-8895-dbc4845cee40" volumeName="kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477001 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22bba1b3-587d-4802-b4ae-946827c3fa7a" volumeName="kubernetes.io/configmap/22bba1b3-587d-4802-b4ae-946827c3fa7a-telemetry-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477040 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98226a59-5234-48f3-a9cd-21de305810dc" volumeName="kubernetes.io/projected/98226a59-5234-48f3-a9cd-21de305810dc-kube-api-access-j2hwr" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477113 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d65a0af4-c96f-44f8-9384-6bae4585983b" volumeName="kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477163 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="11aaad8c-2f25-460f-b4af-f27d8bc682a0" volumeName="kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-utilities" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477200 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fb59696-1d5f-41bb-9211-b89c63b10840" volumeName="kubernetes.io/projected/1fb59696-1d5f-41bb-9211-b89c63b10840-kube-api-access-8djgj" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477212 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f9cd117-c84f-44c9-80a9-879a04d62934" volumeName="kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-node-bootstrap-token" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477229 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6479d88f-463f-48ed-846d-2747752a8abb" volumeName="kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477249 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="836a6d7e-9b26-425f-ae21-00422515d7fe" volumeName="kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-env-overrides" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477261 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98226a59-5234-48f3-a9cd-21de305810dc" volumeName="kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477276 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1fd116-6f63-4344-b7af-278665649e5a" volumeName="kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-apiservice-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477305 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477317 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8" volumeName="kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477338 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c1cc61-8692-4a35-83fc-6bbef7086117" volumeName="kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-trusted-ca-bundle" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477351 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98226a59-5234-48f3-a9cd-21de305810dc" volumeName="kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477362 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="daf25ef5-8247-4dbb-bdc1-55104b1015b7" volumeName="kubernetes.io/empty-dir/daf25ef5-8247-4dbb-bdc1-55104b1015b7-snapshots" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477374 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34382460-b2d7-4154-87ba-c0347a4c0f1b" volumeName="kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-utilities" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477386 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" volumeName="kubernetes.io/projected/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-kube-api-access-kfzqt" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477397 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-trusted-ca-bundle" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477425 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9fe0660-fae4-4f97-8895-dbc4845cee40" volumeName="kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477437 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c29fd426-7c89-434e-8332-1ca31075d4bf" volumeName="kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477474 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29489539-68c6-49dd-bc1b-dcf0c7bb2ebe" volumeName="kubernetes.io/configmap/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477499 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="533fe3c7-504f-40aa-aab0-8d66ef27920f" volumeName="kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-cni-binary-copy" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477551 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5360f3f5-2d07-432f-af45-22659538c55e" volumeName="kubernetes.io/secret/5360f3f5-2d07-432f-af45-22659538c55e-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477564 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db2a7cb1-1d05-4b24-86ed-f823fad5013e" volumeName="kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477576 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01e90033-9ddf-41b4-ab61-e89add6c2fde" volumeName="kubernetes.io/secret/01e90033-9ddf-41b4-ab61-e89add6c2fde-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477601 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07281644-2789-424f-8429-aa4448dda01e" volumeName="kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-binary-copy" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477624 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62ba4bae-a5e1-4c4d-b544-25d0e59eeac2" volumeName="kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477651 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8df029f2-d0ec-4543-9371-7694b1e85a06" volumeName="kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-catalog-content" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477682 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb135cff-1a2e-468d-80ab-f7db3f57552a" volumeName="kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477696 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" volumeName="kubernetes.io/secret/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477713 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89383482-190e-4f74-a81e-b1547e5b9ae6" volumeName="kubernetes.io/secret/89383482-190e-4f74-a81e-b1547e5b9ae6-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477760 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1709ef31-9ddd-42bf-9a95-4be4502a0828" volumeName="kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477783 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9eb45bd-fc01-4707-87ea-64f07f72f6f9" volumeName="kubernetes.io/projected/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-kube-api-access-qxm8p" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477795 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0b28c90-d5b6-44f3-867c-020ece32ac7d" volumeName="kubernetes.io/configmap/e0b28c90-d5b6-44f3-867c-020ece32ac7d-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477823 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1388469-5e55-4c1b-97c3-c88777f29ae7" volumeName="kubernetes.io/projected/f1388469-5e55-4c1b-97c3-c88777f29ae7-kube-api-access" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477835 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f98aeaf7-bf1a-46af-bf1b-85713baa4c67" volumeName="kubernetes.io/configmap/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477848 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" volumeName="kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477859 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62ba4bae-a5e1-4c4d-b544-25d0e59eeac2" volumeName="kubernetes.io/configmap/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-metrics-client-ca" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477871 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca" volumeName="kubernetes.io/configmap/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-trusted-ca" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477886 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aae1df07-cf9f-47a3-b146-2a0adb182660" volumeName="kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477908 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1fd116-6f63-4344-b7af-278665649e5a" volumeName="kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-webhook-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477927 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1" volumeName="kubernetes.io/empty-dir/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-cache" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477954 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="daf25ef5-8247-4dbb-bdc1-55104b1015b7" volumeName="kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-trusted-ca-bundle" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477966 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31969539-bfd1-466f-8697-f13cbbd957df" volumeName="kubernetes.io/projected/31969539-bfd1-466f-8697-f13cbbd957df-kube-api-access-7ts6s" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.477984 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ab951b1-6898-4357-b813-16365f3f89d5" volumeName="kubernetes.io/configmap/8ab951b1-6898-4357-b813-16365f3f89d5-auth-proxy-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478014 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" volumeName="kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478032 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="836a6d7e-9b26-425f-ae21-00422515d7fe" volumeName="kubernetes.io/projected/836a6d7e-9b26-425f-ae21-00422515d7fe-kube-api-access-ms8wk" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478048 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9eb45bd-fc01-4707-87ea-64f07f72f6f9" volumeName="kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-tmp" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478070 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9f9442b-25b9-420f-b748-bb13423809fe" volumeName="kubernetes.io/secret/d9f9442b-25b9-420f-b748-bb13423809fe-catalogserver-certs" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478087 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01e90033-9ddf-41b4-ab61-e89add6c2fde" volumeName="kubernetes.io/projected/01e90033-9ddf-41b4-ab61-e89add6c2fde-kube-api-access-j2tk7" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478115 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478132 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbb46f1-1c33-42fc-8371-6a1bea8c28ff" volumeName="kubernetes.io/projected/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-kube-api-access-8nd7r" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478146 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0b28c90-d5b6-44f3-867c-020ece32ac7d" volumeName="kubernetes.io/secret/e0b28c90-d5b6-44f3-867c-020ece32ac7d-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478175 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" volumeName="kubernetes.io/projected/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-kube-api-access-8k2dv" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478230 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c078827-3bdb-4509-aeb3-eb558df1f6e7" volumeName="kubernetes.io/projected/9c078827-3bdb-4509-aeb3-eb558df1f6e7-kube-api-access-x2qdb" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478299 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dfca740-0387-428a-b957-3e8a09c6e352" volumeName="kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478321 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c104245-d078-4856-9a60-207bb6efcfe8" volumeName="kubernetes.io/secret/5c104245-d078-4856-9a60-207bb6efcfe8-samples-operator-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478344 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="839bf5b1-b242-4bbd-bc09-cf6abcf7f734" volumeName="kubernetes.io/projected/839bf5b1-b242-4bbd-bc09-cf6abcf7f734-kube-api-access-pvxsh" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478394 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c1cc61-8692-4a35-83fc-6bbef7086117" volumeName="kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-image-import-ca" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478423 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478440 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f9cd117-c84f-44c9-80a9-879a04d62934" volumeName="kubernetes.io/projected/2f9cd117-c84f-44c9-80a9-879a04d62934-kube-api-access-m98rt" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478456 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc400b-b3dd-4134-bd27-69dd8369153a" volumeName="kubernetes.io/secret/62fc400b-b3dd-4134-bd27-69dd8369153a-machine-api-operator-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478478 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" volumeName="kubernetes.io/secret/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478506 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef18ace4-7316-4600-9be9-2adc792705e9" volumeName="kubernetes.io/secret/ef18ace4-7316-4600-9be9-2adc792705e9-cloud-credential-operator-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478567 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07281644-2789-424f-8429-aa4448dda01e" volumeName="kubernetes.io/projected/07281644-2789-424f-8429-aa4448dda01e-kube-api-access-l5pw4" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478587 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31969539-bfd1-466f-8697-f13cbbd957df" volumeName="kubernetes.io/secret/31969539-bfd1-466f-8697-f13cbbd957df-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478621 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c1cc61-8692-4a35-83fc-6bbef7086117" volumeName="kubernetes.io/projected/59c1cc61-8692-4a35-83fc-6bbef7086117-kube-api-access-mp57v" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478649 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" volumeName="kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478668 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89ed6373-78f8-4d77-82b2-1ab055b5b862" volumeName="kubernetes.io/configmap/89ed6373-78f8-4d77-82b2-1ab055b5b862-metrics-client-ca" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478690 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d65a0af4-c96f-44f8-9384-6bae4585983b" volumeName="kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-profile-collector-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478705 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f98aeaf7-bf1a-46af-bf1b-85713baa4c67" volumeName="kubernetes.io/secret/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478775 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/projected/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-kube-api-access-lvjcp" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478798 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-client" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478830 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9f9442b-25b9-420f-b748-bb13423809fe" volumeName="kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-kube-api-access-kxs4n" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478847 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="daf25ef5-8247-4dbb-bdc1-55104b1015b7" volumeName="kubernetes.io/projected/daf25ef5-8247-4dbb-bdc1-55104b1015b7-kube-api-access-78bqv" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478864 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07281644-2789-424f-8429-aa4448dda01e" volumeName="kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-whereabouts-configmap" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478882 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d8cd7c5-31fd-4dca-b39b-6d62eb573707" volumeName="kubernetes.io/secret/4d8cd7c5-31fd-4dca-b39b-6d62eb573707-tls-certificates" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478898 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af18215b-e749-4565-bb6c-24e92c452817" volumeName="kubernetes.io/configmap/af18215b-e749-4565-bb6c-24e92c452817-config-volume" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478916 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-env-overrides" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478951 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c1cc61-8692-4a35-83fc-6bbef7086117" volumeName="kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-client" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.478973 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29489539-68c6-49dd-bc1b-dcf0c7bb2ebe" volumeName="kubernetes.io/secret/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-proxy-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479005 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9eb45bd-fc01-4707-87ea-64f07f72f6f9" volumeName="kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-tuned" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479036 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bbdbadd9-eeaa-46ef-936e-5db8d395c118" volumeName="kubernetes.io/projected/bbdbadd9-eeaa-46ef-936e-5db8d395c118-kube-api-access-ttmwx" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479063 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d65a0af4-c96f-44f8-9384-6bae4585983b" volumeName="kubernetes.io/projected/d65a0af4-c96f-44f8-9384-6bae4585983b-kube-api-access-bpk24" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479145 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db2a7cb1-1d05-4b24-86ed-f823fad5013e" volumeName="kubernetes.io/configmap/db2a7cb1-1d05-4b24-86ed-f823fad5013e-trusted-ca" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479181 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21e8e44b-b883-4afb-af90-d6c1265edf34" volumeName="kubernetes.io/projected/21e8e44b-b883-4afb-af90-d6c1265edf34-kube-api-access-rk6hv" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479209 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62ba4bae-a5e1-4c4d-b544-25d0e59eeac2" volumeName="kubernetes.io/projected/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-kube-api-access-2dx69" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479258 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/projected/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-kube-api-access-2zkbq" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479277 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bbdbadd9-eeaa-46ef-936e-5db8d395c118" volumeName="kubernetes.io/secret/bbdbadd9-eeaa-46ef-936e-5db8d395c118-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479292 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4" volumeName="kubernetes.io/projected/bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4-kube-api-access-s4j88" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479314 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="daf25ef5-8247-4dbb-bdc1-55104b1015b7" volumeName="kubernetes.io/secret/daf25ef5-8247-4dbb-bdc1-55104b1015b7-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479335 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8c48a22-ed96-42c5-ac4a-dd7d4f204539" volumeName="kubernetes.io/secret/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479354 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb135cff-1a2e-468d-80ab-f7db3f57552a" volumeName="kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-auth-proxy-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479369 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbb46f1-1c33-42fc-8371-6a1bea8c28ff" volumeName="kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479388 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d060bff-3c25-4eeb-bdd3-e20fb2687645" volumeName="kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479412 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" volumeName="kubernetes.io/empty-dir/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-audit-log" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479428 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c078827-3bdb-4509-aeb3-eb558df1f6e7" volumeName="kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-metrics-certs" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479444 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c078827-3bdb-4509-aeb3-eb558df1f6e7" volumeName="kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-stats-auth" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479461 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9fe0660-fae4-4f97-8895-dbc4845cee40" volumeName="kubernetes.io/projected/b9fe0660-fae4-4f97-8895-dbc4845cee40-kube-api-access-7r85p" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479495 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd609bd3-2525-4b88-8f07-94a0418fb582" volumeName="kubernetes.io/projected/bd609bd3-2525-4b88-8f07-94a0418fb582-kube-api-access-zztmz" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479588 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f98aeaf7-bf1a-46af-bf1b-85713baa4c67" volumeName="kubernetes.io/projected/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-kube-api-access" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479629 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22bba1b3-587d-4802-b4ae-946827c3fa7a" volumeName="kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479647 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29489539-68c6-49dd-bc1b-dcf0c7bb2ebe" volumeName="kubernetes.io/projected/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-kube-api-access-rxr6j" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479664 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d060bff-3c25-4eeb-bdd3-e20fb2687645" volumeName="kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-profile-collector-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479686 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22bba1b3-587d-4802-b4ae-946827c3fa7a" volumeName="kubernetes.io/projected/22bba1b3-587d-4802-b4ae-946827c3fa7a-kube-api-access-2wnh5" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479708 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31969539-bfd1-466f-8697-f13cbbd957df" volumeName="kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-ovnkube-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479724 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/projected/478be5e4-cf17-4ebf-a45a-c18cd2b69929-kube-api-access-5j4cs" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479752 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/secret/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479769 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7635c0ff-4d40-4310-8187-230323e504e0" volumeName="kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-config" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479785 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aae1df07-cf9f-47a3-b146-2a0adb182660" volumeName="kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-metrics-client-ca" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479811 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aae1df07-cf9f-47a3-b146-2a0adb182660" volumeName="kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479832 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af18215b-e749-4565-bb6c-24e92c452817" volumeName="kubernetes.io/projected/af18215b-e749-4565-bb6c-24e92c452817-kube-api-access-7c9xz" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479855 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="312ca024-c8f0-4994-8f9a-b707607341fe" volumeName="kubernetes.io/secret/312ca024-c8f0-4994-8f9a-b707607341fe-metrics-tls" seLinuxMountContext=""
Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479873 31420 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fca213c3-42ca-4341-a2e6-a143b9389f9e" volumeName="kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-encryption-config" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479903 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" volumeName="kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479927 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca" volumeName="kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479944 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ab951b1-6898-4357-b813-16365f3f89d5" volumeName="kubernetes.io/secret/8ab951b1-6898-4357-b813-16365f3f89d5-cert" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479966 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1" volumeName="kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-kube-api-access-sc9wx" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.479984 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd609bd3-2525-4b88-8f07-94a0418fb582" volumeName="kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-config" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.480001 31420 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34382460-b2d7-4154-87ba-c0347a4c0f1b" volumeName="kubernetes.io/projected/34382460-b2d7-4154-87ba-c0347a4c0f1b-kube-api-access-5dx9s" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.480018 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89383482-190e-4f74-a81e-b1547e5b9ae6" volumeName="kubernetes.io/configmap/89383482-190e-4f74-a81e-b1547e5b9ae6-service-ca" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.480050 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c1cc61-8692-4a35-83fc-6bbef7086117" volumeName="kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-serving-ca" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.480072 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-service-ca-bundle" seLinuxMountContext="" Feb 20 12:04:55.479875 master-0 kubenswrapper[31420]: I0220 12:04:55.480090 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c078827-3bdb-4509-aeb3-eb558df1f6e7" volumeName="kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-default-certificate" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.480134 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19cf75ed-6a4e-444d-8975-fa6ecba79f13" volumeName="kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-catalog-content" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.480158 
31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c104245-d078-4856-9a60-207bb6efcfe8" volumeName="kubernetes.io/projected/5c104245-d078-4856-9a60-207bb6efcfe8-kube-api-access-nlcjf" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.480175 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8df029f2-d0ec-4543-9371-7694b1e85a06" volumeName="kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-utilities" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488195 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9fe0660-fae4-4f97-8895-dbc4845cee40" volumeName="kubernetes.io/configmap/b9fe0660-fae4-4f97-8895-dbc4845cee40-metrics-client-ca" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488251 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8c48a22-ed96-42c5-ac4a-dd7d4f204539" volumeName="kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-auth-proxy-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488272 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39790258-73bc-4c37-a935-e8d3c2a2d5c6" volumeName="kubernetes.io/projected/39790258-73bc-4c37-a935-e8d3c2a2d5c6-kube-api-access-94lkp" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488286 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c29fd426-7c89-434e-8332-1ca31075d4bf" volumeName="kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 
12:04:55.488305 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb135cff-1a2e-468d-80ab-f7db3f57552a" volumeName="kubernetes.io/projected/eb135cff-1a2e-468d-80ab-f7db3f57552a-kube-api-access-tk5sc" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488320 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" volumeName="kubernetes.io/empty-dir/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-available-featuregates" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488334 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1fd116-6f63-4344-b7af-278665649e5a" volumeName="kubernetes.io/projected/ae1fd116-6f63-4344-b7af-278665649e5a-kube-api-access-wf682" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488350 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb135cff-1a2e-468d-80ab-f7db3f57552a" volumeName="kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-images" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488365 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca" volumeName="kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-bound-sa-token" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488378 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="11aaad8c-2f25-460f-b4af-f27d8bc682a0" volumeName="kubernetes.io/projected/11aaad8c-2f25-460f-b4af-f27d8bc682a0-kube-api-access-x5z86" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 
kubenswrapper[31420]: I0220 12:04:55.488394 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-ca" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488408 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37cb3bb1-f5ba-4b7b-9af9-55bf61906a51" volumeName="kubernetes.io/configmap/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-mcd-auth-proxy-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488426 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89ed6373-78f8-4d77-82b2-1ab055b5b862" volumeName="kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488442 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db2a7cb1-1d05-4b24-86ed-f823fad5013e" volumeName="kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-bound-sa-token" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488457 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0b28c90-d5b6-44f3-867c-020ece32ac7d" volumeName="kubernetes.io/projected/e0b28c90-d5b6-44f3-867c-020ece32ac7d-kube-api-access" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488475 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fca213c3-42ca-4341-a2e6-a143b9389f9e" volumeName="kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-client" seLinuxMountContext="" Feb 20 
12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488488 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="11aaad8c-2f25-460f-b4af-f27d8bc682a0" volumeName="kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-catalog-content" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488506 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7635c0ff-4d40-4310-8187-230323e504e0" volumeName="kubernetes.io/projected/7635c0ff-4d40-4310-8187-230323e504e0-kube-api-access-p5m78" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488520 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5360f3f5-2d07-432f-af45-22659538c55e" volumeName="kubernetes.io/configmap/5360f3f5-2d07-432f-af45-22659538c55e-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488553 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fca213c3-42ca-4341-a2e6-a143b9389f9e" volumeName="kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-trusted-ca-bundle" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488574 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" volumeName="kubernetes.io/secret/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-serving-cert" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488600 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aae1df07-cf9f-47a3-b146-2a0adb182660" volumeName="kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls" seLinuxMountContext="" Feb 20 
12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488620 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fca213c3-42ca-4341-a2e6-a143b9389f9e" volumeName="kubernetes.io/projected/fca213c3-42ca-4341-a2e6-a143b9389f9e-kube-api-access-7krn8" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488634 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62ba4bae-a5e1-4c4d-b544-25d0e59eeac2" volumeName="kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488649 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7635c0ff-4d40-4310-8187-230323e504e0" volumeName="kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-auth-proxy-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488668 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="836a6d7e-9b26-425f-ae21-00422515d7fe" volumeName="kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-ovnkube-identity-cm" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488683 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89ed6373-78f8-4d77-82b2-1ab055b5b862" volumeName="kubernetes.io/projected/89ed6373-78f8-4d77-82b2-1ab055b5b862-kube-api-access-f64ql" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488701 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1" volumeName="kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-ca-certs" 
seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488716 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c29fd426-7c89-434e-8332-1ca31075d4bf" volumeName="kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488729 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62ba4bae-a5e1-4c4d-b544-25d0e59eeac2" volumeName="kubernetes.io/empty-dir/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-textfile" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488748 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34382460-b2d7-4154-87ba-c0347a4c0f1b" volumeName="kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-catalog-content" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488763 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a97bbf5-7409-4f36-894b-b88284e1b6d0" volumeName="kubernetes.io/projected/8a97bbf5-7409-4f36-894b-b88284e1b6d0-kube-api-access-vq4ct" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488778 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c1cc61-8692-4a35-83fc-6bbef7086117" volumeName="kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-audit" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488799 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39ccf158-b40f-4dba-90e2-27b1409487b7" volumeName="kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm" 
seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488813 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-script-lib" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488829 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbb46f1-1c33-42fc-8371-6a1bea8c28ff" volumeName="kubernetes.io/configmap/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-trusted-ca" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488843 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c1cc61-8692-4a35-83fc-6bbef7086117" volumeName="kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488856 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af18215b-e749-4565-bb6c-24e92c452817" volumeName="kubernetes.io/secret/af18215b-e749-4565-bb6c-24e92c452817-metrics-tls" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488872 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="afa174b3-912c-4b56-b5eb-f3e3df012c11" volumeName="kubernetes.io/projected/afa174b3-912c-4b56-b5eb-f3e3df012c11-kube-api-access-2795m" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488886 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="042d8457-04dc-4171-8b0f-f9e3de695c46" volumeName="kubernetes.io/projected/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-api-access-hpz9d" 
seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488903 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a97bbf5-7409-4f36-894b-b88284e1b6d0" volumeName="kubernetes.io/secret/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-key" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488919 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ab951b1-6898-4357-b813-16365f3f89d5" volumeName="kubernetes.io/projected/8ab951b1-6898-4357-b813-16365f3f89d5-kube-api-access-xdzzt" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488933 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1388469-5e55-4c1b-97c3-c88777f29ae7" volumeName="kubernetes.io/configmap/f1388469-5e55-4c1b-97c3-c88777f29ae7-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488950 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19cf75ed-6a4e-444d-8975-fa6ecba79f13" volumeName="kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-utilities" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488963 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd609bd3-2525-4b88-8f07-94a0418fb582" volumeName="kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-images" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488982 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbce6cdc-040a-48e1-8a81-b6ff9c180eba" volumeName="kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert" 
seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.488996 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cbb46f1-1c33-42fc-8371-6a1bea8c28ff" volumeName="kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489009 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="533fe3c7-504f-40aa-aab0-8d66ef27920f" volumeName="kubernetes.io/projected/533fe3c7-504f-40aa-aab0-8d66ef27920f-kube-api-access-jrwcs" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489026 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98226a59-5234-48f3-a9cd-21de305810dc" volumeName="kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489040 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="533fe3c7-504f-40aa-aab0-8d66ef27920f" volumeName="kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-daemon-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489059 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31969539-bfd1-466f-8697-f13cbbd957df" volumeName="kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-env-overrides" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489073 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc400b-b3dd-4134-bd27-69dd8369153a" volumeName="kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-config" 
seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489086 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8df029f2-d0ec-4543-9371-7694b1e85a06" volumeName="kubernetes.io/projected/8df029f2-d0ec-4543-9371-7694b1e85a06-kube-api-access-kwgg6" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489103 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="906307ef-d988-49e7-9d63-39116a2c4880" volumeName="kubernetes.io/configmap/906307ef-d988-49e7-9d63-39116a2c4880-iptables-alerter-script" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489116 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01e90033-9ddf-41b4-ab61-e89add6c2fde" volumeName="kubernetes.io/configmap/01e90033-9ddf-41b4-ab61-e89add6c2fde-config" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489133 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7635c0ff-4d40-4310-8187-230323e504e0" volumeName="kubernetes.io/secret/7635c0ff-4d40-4310-8187-230323e504e0-machine-approver-tls" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489168 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aae1df07-cf9f-47a3-b146-2a0adb182660" volumeName="kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489191 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbce6cdc-040a-48e1-8a81-b6ff9c180eba" 
volumeName="kubernetes.io/projected/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-kube-api-access-z2kct" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489212 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" volumeName="kubernetes.io/projected/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-kube-api-access-8p4w6" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489228 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07281644-2789-424f-8429-aa4448dda01e" volumeName="kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-sysctl-allowlist" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489249 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aae1df07-cf9f-47a3-b146-2a0adb182660" volumeName="kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489269 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="042d8457-04dc-4171-8b0f-f9e3de695c46" volumeName="kubernetes.io/empty-dir/042d8457-04dc-4171-8b0f-f9e3de695c46-volume-directive-shadow" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489284 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19cf75ed-6a4e-444d-8975-fa6ecba79f13" volumeName="kubernetes.io/projected/19cf75ed-6a4e-444d-8975-fa6ecba79f13-kube-api-access-7hxz5" seLinuxMountContext="" Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489303 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" volumeName="kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-service-ca" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489322 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" volumeName="kubernetes.io/configmap/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-config" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489338 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae1fd116-6f63-4344-b7af-278665649e5a" volumeName="kubernetes.io/empty-dir/ae1fd116-6f63-4344-b7af-278665649e5a-tmpfs" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489359 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd609bd3-2525-4b88-8f07-94a0418fb582" volumeName="kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cert" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489378 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="daf25ef5-8247-4dbb-bdc1-55104b1015b7" volumeName="kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-service-ca-bundle" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489393 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="042d8457-04dc-4171-8b0f-f9e3de695c46" volumeName="kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489412 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c29fd426-7c89-434e-8332-1ca31075d4bf" volumeName="kubernetes.io/projected/c29fd426-7c89-434e-8332-1ca31075d4bf-kube-api-access-z7k2n" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489428 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c078827-3bdb-4509-aeb3-eb558df1f6e7" volumeName="kubernetes.io/configmap/9c078827-3bdb-4509-aeb3-eb558df1f6e7-service-ca-bundle" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489447 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37cb3bb1-f5ba-4b7b-9af9-55bf61906a51" volumeName="kubernetes.io/secret/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-proxy-tls" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489468 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37cb3bb1-f5ba-4b7b-9af9-55bf61906a51" volumeName="kubernetes.io/projected/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-kube-api-access-qkn7h" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489482 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="478be5e4-cf17-4ebf-a45a-c18cd2b69929" volumeName="kubernetes.io/secret/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489501 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5360f3f5-2d07-432f-af45-22659538c55e" volumeName="kubernetes.io/projected/5360f3f5-2d07-432f-af45-22659538c55e-kube-api-access-7vvm8" seLinuxMountContext=""
Feb 20 12:04:55.489719 master-0 kubenswrapper[31420]: I0220 12:04:55.489520 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98226a59-5234-48f3-a9cd-21de305810dc" volumeName="kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493670 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1388469-5e55-4c1b-97c3-c88777f29ae7" volumeName="kubernetes.io/secret/f1388469-5e55-4c1b-97c3-c88777f29ae7-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493709 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1709ef31-9ddd-42bf-9a95-4be4502a0828" volumeName="kubernetes.io/projected/1709ef31-9ddd-42bf-9a95-4be4502a0828-kube-api-access-79j9f" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493720 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="042d8457-04dc-4171-8b0f-f9e3de695c46" volumeName="kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493732 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dfca740-0387-428a-b957-3e8a09c6e352" volumeName="kubernetes.io/configmap/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-trusted-ca" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493742 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dfca740-0387-428a-b957-3e8a09c6e352" volumeName="kubernetes.io/projected/6dfca740-0387-428a-b957-3e8a09c6e352-kube-api-access-d4457" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493752 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8c48a22-ed96-42c5-ac4a-dd7d4f204539" volumeName="kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-images" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493762 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1df81fcc-f967-4874-ad16-1a89f0e7875a" volumeName="kubernetes.io/projected/1df81fcc-f967-4874-ad16-1a89f0e7875a-kube-api-access-7mggv" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493772 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc400b-b3dd-4134-bd27-69dd8369153a" volumeName="kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-images" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493750 31420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493782 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d060bff-3c25-4eeb-bdd3-e20fb2687645" volumeName="kubernetes.io/projected/4d060bff-3c25-4eeb-bdd3-e20fb2687645-kube-api-access-26x7b" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493833 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a97bbf5-7409-4f36-894b-b88284e1b6d0" volumeName="kubernetes.io/configmap/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-cabundle" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493843 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" volumeName="kubernetes.io/projected/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-kube-api-access-2k8n8" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493852 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8c48a22-ed96-42c5-ac4a-dd7d4f204539" volumeName="kubernetes.io/projected/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-kube-api-access-ksx6l" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493861 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fca213c3-42ca-4341-a2e6-a143b9389f9e" volumeName="kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493871 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1df81fcc-f967-4874-ad16-1a89f0e7875a" volumeName="kubernetes.io/secret/1df81fcc-f967-4874-ad16-1a89f0e7875a-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493881 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" volumeName="kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-config" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493891 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca" volumeName="kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-kube-api-access-lqxhp" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493904 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="906307ef-d988-49e7-9d63-39116a2c4880" volumeName="kubernetes.io/projected/906307ef-d988-49e7-9d63-39116a2c4880-kube-api-access-5j82z" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493917 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c1cc61-8692-4a35-83fc-6bbef7086117" volumeName="kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-encryption-config" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493931 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c1cc61-8692-4a35-83fc-6bbef7086117" volumeName="kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-serving-cert" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493943 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef18ace4-7316-4600-9be9-2adc792705e9" volumeName="kubernetes.io/configmap/ef18ace4-7316-4600-9be9-2adc792705e9-cco-trusted-ca" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493955 31420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="312ca024-c8f0-4994-8f9a-b707607341fe" volumeName="kubernetes.io/projected/312ca024-c8f0-4994-8f9a-b707607341fe-kube-api-access-bpnmz" seLinuxMountContext=""
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493967 31420 reconstruct.go:97] "Volume reconstruction finished"
Feb 20 12:04:55.494019 master-0 kubenswrapper[31420]: I0220 12:04:55.493976 31420 reconciler.go:26] "Reconciler: start to sync state"
Feb 20 12:04:55.495987 master-0 kubenswrapper[31420]: I0220 12:04:55.495648 31420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 20 12:04:55.495987 master-0 kubenswrapper[31420]: I0220 12:04:55.495691 31420 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 20 12:04:55.495987 master-0 kubenswrapper[31420]: I0220 12:04:55.495717 31420 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 20 12:04:55.495987 master-0 kubenswrapper[31420]: E0220 12:04:55.495772 31420 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 20 12:04:55.497559 master-0 kubenswrapper[31420]: I0220 12:04:55.497251 31420 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 20 12:04:55.500955 master-0 kubenswrapper[31420]: I0220 12:04:55.500908 31420 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 20 12:04:55.503844 master-0 kubenswrapper[31420]: I0220 12:04:55.503793 31420 generic.go:334] "Generic (PLEG): container finished" podID="6c3aa45a-44cc-48fb-a478-ce01a70c4b02" containerID="de3cf90976c88f94ee4890bd56c7f0488152bb4020f300dabbcd987cd8523183" exitCode=0
Feb 20 12:04:55.506330 master-0 kubenswrapper[31420]: I0220 12:04:55.506297 31420 generic.go:334] "Generic (PLEG): container finished" podID="839bf5b1-b242-4bbd-bc09-cf6abcf7f734" containerID="a536c272954462921fc604267b25f8d65d6f6bc9444d2c6bb8607f4b9f14a00d" exitCode=0
Feb 20 12:04:55.509033 master-0 kubenswrapper[31420]: I0220 12:04:55.508257 31420 generic.go:334] "Generic (PLEG): container finished" podID="19cf75ed-6a4e-444d-8975-fa6ecba79f13" containerID="8d7f06e36f50ed12da8519e84b3ce5adfb4b80cd958663f0c41472dd9f14ecbe" exitCode=0
Feb 20 12:04:55.509033 master-0 kubenswrapper[31420]: I0220 12:04:55.508280 31420 generic.go:334] "Generic (PLEG): container finished" podID="19cf75ed-6a4e-444d-8975-fa6ecba79f13" containerID="6ae49cdae8b47f749dc0f6149f6ebf356a1a949182d0fffef1f74b151688ef30" exitCode=0
Feb 20 12:04:55.510201 master-0 kubenswrapper[31420]: I0220 12:04:55.510166 31420 generic.go:334] "Generic (PLEG): container finished" podID="eb93420d-7c5a-4492-bd16-0104104406b4" containerID="76e85ab561cbad6abc6fb8fe1c91c7b03e4b40963c9f88e69d0121b220aa047b" exitCode=0
Feb 20 12:04:55.512354 master-0 kubenswrapper[31420]: I0220 12:04:55.512293 31420 generic.go:334] "Generic (PLEG): container finished" podID="6dfca740-0387-428a-b957-3e8a09c6e352" containerID="30cc0163534ef05cf8f1af8016be6ca5a9410b7c83b47a06334775bed42b37ab" exitCode=0
Feb 20 12:04:55.515853 master-0 kubenswrapper[31420]: I0220 12:04:55.514916 31420 generic.go:334] "Generic (PLEG): container finished" podID="97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" containerID="54c853c11767fb2e9c16b82b830e00aa5d8a596a5498e4384e29c0cde6cc8aed" exitCode=0
Feb 20 12:04:55.520010 master-0 kubenswrapper[31420]: I0220 12:04:55.519977 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7bd4430b-8dbc-46df-9efe-49d520a7c75a/installer/0.log"
Feb 20 12:04:55.520396 master-0 kubenswrapper[31420]: I0220 12:04:55.520287 31420 generic.go:334] "Generic (PLEG): container finished" podID="7bd4430b-8dbc-46df-9efe-49d520a7c75a" containerID="e80cac2721cbb0873c9a56ecbcc2ab13f0cf0ddd137a7458a4798813bbf93c32" exitCode=1
Feb 20 12:04:55.542113 master-0 kubenswrapper[31420]: I0220 12:04:55.541771 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-vs87f_b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1/manager/1.log"
Feb 20 12:04:55.543299 master-0 kubenswrapper[31420]: I0220 12:04:55.543249 31420 generic.go:334] "Generic (PLEG): container finished" podID="b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1" containerID="c71d66a4b93651a9ca77699b6ac7544e90310b6a6968e997721a5f52319085ac" exitCode=1
Feb 20 12:04:55.546844 master-0 kubenswrapper[31420]: I0220 12:04:55.546791 31420 generic.go:334] "Generic (PLEG): container finished" podID="8a97bbf5-7409-4f36-894b-b88284e1b6d0" containerID="0394ee858152290726abadbd7c30c0f31262c014870cefb1d45db15a3536bc63" exitCode=0
Feb 20 12:04:55.551061 master-0 kubenswrapper[31420]: I0220 12:04:55.551019 31420 generic.go:334] "Generic (PLEG): container finished" podID="62ba4bae-a5e1-4c4d-b544-25d0e59eeac2" containerID="34f6ced44b08101957e2d08e45cc1aa5835ffa33e5d435353a4994d649f8ae48" exitCode=0
Feb 20 12:04:55.555364 master-0 kubenswrapper[31420]: I0220 12:04:55.555329 31420 generic.go:334] "Generic (PLEG): container finished" podID="1df81fcc-f967-4874-ad16-1a89f0e7875a" containerID="f658812d3a0840e273c061153c1646fa88e6e4617da166e0ff391ed3c4a82be1" exitCode=0
Feb 20 12:04:55.566704 master-0 kubenswrapper[31420]: I0220 12:04:55.566649 31420 generic.go:334] "Generic (PLEG): container finished" podID="312ca024-c8f0-4994-8f9a-b707607341fe" containerID="a2f57d0cbbd57b5325ad0aac9713219f739036a6acc3195c5bbfa570326dbcd4" exitCode=0
Feb 20 12:04:55.569717 master-0 kubenswrapper[31420]: I0220 12:04:55.569684 31420 generic.go:334] "Generic (PLEG): container finished" podID="1d3a36bb-9d11-48b3-a3b5-07b47738ef97" containerID="c6679863b5436d03c685416538ec6a0c239b8d55dfa6ed45b92990d366d1cd74" exitCode=0
Feb 20 12:04:55.585782 master-0 kubenswrapper[31420]: I0220 12:04:55.583927 31420 generic.go:334] "Generic (PLEG): container finished" podID="34382460-b2d7-4154-87ba-c0347a4c0f1b" containerID="2584c39a031705eeedc4fb35b529a7825665a56d2f5188033d504f6edec7e39f" exitCode=0
Feb 20 12:04:55.585782 master-0 kubenswrapper[31420]: I0220 12:04:55.583964 31420 generic.go:334] "Generic (PLEG): container finished" podID="34382460-b2d7-4154-87ba-c0347a4c0f1b" containerID="f4e2e12c0322e37b5aee8b5bfc056b8e62e780d54f553d8e5ba777ff04b0e41e" exitCode=0
Feb 20 12:04:55.596665 master-0 kubenswrapper[31420]: E0220 12:04:55.596612 31420 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 20 12:04:55.598488 master-0 kubenswrapper[31420]: I0220 12:04:55.598443 31420 generic.go:334] "Generic (PLEG): container finished" podID="bbdbadd9-eeaa-46ef-936e-5db8d395c118" containerID="6e11d702e4faa3980c4584f7fbbe0edd61d03b400f537710d4a26da3248d5efc" exitCode=0
Feb 20 12:04:55.609517 master-0 kubenswrapper[31420]: I0220 12:04:55.609482 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/0.log"
Feb 20 12:04:55.614382 master-0 kubenswrapper[31420]: I0220 12:04:55.614293 31420 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="ac763378dacfc4363dcfb084085dbc52f6dc5edd975cf1b421f17f519d7cca40" exitCode=255
Feb 20 12:04:55.614382 master-0 kubenswrapper[31420]: I0220 12:04:55.614348 31420 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0" exitCode=0
Feb 20 12:04:55.617665 master-0 kubenswrapper[31420]: I0220 12:04:55.617629 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt_8ab951b1-6898-4357-b813-16365f3f89d5/cluster-autoscaler-operator/0.log"
Feb 20 12:04:55.618007 master-0 kubenswrapper[31420]: I0220 12:04:55.617953 31420 generic.go:334] "Generic (PLEG): container finished" podID="8ab951b1-6898-4357-b813-16365f3f89d5" containerID="9a057bcbfd065697f6b207a64f408c746a9bea8b73ae774c709e37560f5635da" exitCode=255
Feb 20 12:04:55.619857 master-0 kubenswrapper[31420]: I0220 12:04:55.619824 31420 generic.go:334] "Generic (PLEG): container finished" podID="f1388469-5e55-4c1b-97c3-c88777f29ae7" containerID="b288109ee32770ae0136eb8073a319dc58d7b8d8a7d067c5f9bf71abd12290e4" exitCode=0
Feb 20 12:04:55.628625 master-0 kubenswrapper[31420]: I0220 12:04:55.628573 31420 generic.go:334] "Generic (PLEG): container finished" podID="9c078827-3bdb-4509-aeb3-eb558df1f6e7" containerID="e6a70c0e0f237b900ba323a2d2250f1ed5e02a069194617f8e9507c1f16cde63" exitCode=0
Feb 20 12:04:55.631260 master-0 kubenswrapper[31420]: I0220 12:04:55.631201 31420 generic.go:334] "Generic (PLEG): container finished" podID="37cb3bb1-f5ba-4b7b-9af9-55bf61906a51" containerID="cd16bd752b73b8b49c9f915a16effe79766d7670ad8d9f340d00a15fdc577892" exitCode=0
Feb 20 12:04:55.633682 master-0 kubenswrapper[31420]: I0220 12:04:55.633653 31420 generic.go:334] "Generic (PLEG): container finished" podID="59c1cc61-8692-4a35-83fc-6bbef7086117" containerID="80ebc6a1a97e735d70ead1262f4a97848d649fb396b046241e543eb44b09793c" exitCode=0
Feb 20 12:04:55.636507 master-0 kubenswrapper[31420]: I0220 12:04:55.636467 31420 generic.go:334] "Generic (PLEG): container finished" podID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerID="136d6f3a9793756201eb14c53a4ba43141e49068fbce78152349e9d918491065" exitCode=0
Feb 20 12:04:55.638162 master-0 kubenswrapper[31420]: I0220 12:04:55.637816 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-mk9fd_02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/openshift-config-operator/2.log"
Feb 20 12:04:55.638476 master-0 kubenswrapper[31420]: I0220 12:04:55.638415 31420 generic.go:334] "Generic (PLEG): container finished" podID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerID="31ff1e117529b9aa438962fcdb3c5051bf53ab61f9540449f696309f0c076076" exitCode=255
Feb 20 12:04:55.638476 master-0 kubenswrapper[31420]: I0220 12:04:55.638473 31420 generic.go:334] "Generic (PLEG): container finished" podID="02c6a0e7-6363-4d7e-a8eb-b4d38b74b145" containerID="7271b0c2f4252bb9d18ca82cf9dc28e192310d41fc4837e2edcbc00ae9a2f5cd" exitCode=0
Feb 20 12:04:55.641041 master-0 kubenswrapper[31420]: I0220 12:04:55.640541 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 20 12:04:55.643460 master-0 kubenswrapper[31420]: I0220 12:04:55.643396 31420 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="53e7dc45156105f926a77b4b48981d5e387a572098dd2e0e299ab01a43056605" exitCode=1
Feb 20 12:04:55.643460 master-0 kubenswrapper[31420]: I0220 12:04:55.643446 31420 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="fbba6df4a59d8edb9a6ffa0ddbac2d1f8af28cf04b9ed9d72f140a13ab377500" exitCode=0
Feb 20 12:04:55.651505 master-0 kubenswrapper[31420]: I0220 12:04:55.648292 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/config-sync-controllers/0.log"
Feb 20 12:04:55.651505 master-0 kubenswrapper[31420]: I0220 12:04:55.648642 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/cluster-cloud-controller-manager/0.log"
Feb 20 12:04:55.651505 master-0 kubenswrapper[31420]: I0220 12:04:55.648679 31420 generic.go:334] "Generic (PLEG): container finished" podID="e8c48a22-ed96-42c5-ac4a-dd7d4f204539" containerID="3de8f37a5f333a2a0c06335a1e1da92af4239f0f86ce6fc2f55eb1e6b9d57ccf" exitCode=1
Feb 20 12:04:55.651505 master-0 kubenswrapper[31420]: I0220 12:04:55.648694 31420 generic.go:334] "Generic (PLEG): container finished" podID="e8c48a22-ed96-42c5-ac4a-dd7d4f204539" containerID="5ae28e0dd7617cbe98b911e55270072130fade6b7dce5510c67c9d3d17bc60bf" exitCode=1
Feb 20 12:04:55.659845 master-0 kubenswrapper[31420]: I0220 12:04:55.659001 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-gwpst_4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/cluster-node-tuning-operator/0.log"
Feb 20 12:04:55.659845 master-0 kubenswrapper[31420]: I0220 12:04:55.659056 31420 generic.go:334] "Generic (PLEG): container finished" podID="4cbb46f1-1c33-42fc-8371-6a1bea8c28ff" containerID="f478ae19f7f37b0b144530d29503cc9eb3edcf8d27e26035c2139b9aa149987b" exitCode=1
Feb 20 12:04:55.676192 master-0 kubenswrapper[31420]: I0220 12:04:55.675923 31420 generic.go:334] "Generic (PLEG): container finished" podID="478be5e4-cf17-4ebf-a45a-c18cd2b69929" containerID="5b4211a2cc9a2198d36fabbec1b685ffa0d3133fee06da2f4d880279f8a4b229" exitCode=0
Feb 20 12:04:55.683172 master-0 kubenswrapper[31420]: I0220 12:04:55.683114 31420 generic.go:334] "Generic (PLEG): container finished" podID="98226a59-5234-48f3-a9cd-21de305810dc" containerID="1b7b0cda43f9601273f5b828026cbdd290a92a99bdd94b1cd74e1268067e317e" exitCode=0
Feb 20 12:04:55.689770 master-0 kubenswrapper[31420]: I0220 12:04:55.689744 31420 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="5fc806dcdedd7688a77f47260543d32b11e6e7c063979fee3300f4d963557c80" exitCode=0
Feb 20 12:04:55.689931 master-0 kubenswrapper[31420]: I0220 12:04:55.689767 31420 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="da92cbde4f74d2a7379dc50dca70ba345568f184d4de102a2743c4569e81bf1e" exitCode=0
Feb 20 12:04:55.689931 master-0 kubenswrapper[31420]: I0220 12:04:55.689793 31420 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="75dfcf2c7e75ed34e7d8254c990b8555834b339c0315692edbac504af1d4c6bd" exitCode=0
Feb 20 12:04:55.691549 master-0 kubenswrapper[31420]: I0220 12:04:55.691507 31420 generic.go:334] "Generic (PLEG): container finished" podID="89383482-190e-4f74-a81e-b1547e5b9ae6" containerID="cdc9cc9ed8b0ca2df37b48bd33917f4f6c78f23c4f8aeddaab64905dab048bcd" exitCode=0
Feb 20 12:04:55.693715 master-0 kubenswrapper[31420]: I0220 12:04:55.693668 31420 generic.go:334] "Generic (PLEG): container finished" podID="31969539-bfd1-466f-8697-f13cbbd957df" containerID="61a6b1802bd2528d8da1d6327d61e384e195f07e99b735a85a4645765053313c" exitCode=0
Feb 20 12:04:55.698520 master-0 kubenswrapper[31420]: I0220 12:04:55.698470 31420 generic.go:334] "Generic (PLEG): container finished" podID="11aaad8c-2f25-460f-b4af-f27d8bc682a0" containerID="063c2153ddfff922b46919bbdf5dbe745ed9d91ad8a4df1a43233846341ae431" exitCode=0
Feb 20 12:04:55.698520 master-0 kubenswrapper[31420]: I0220 12:04:55.698516 31420 generic.go:334] "Generic (PLEG): container finished" podID="11aaad8c-2f25-460f-b4af-f27d8bc682a0" containerID="ff081d256f16ffc5993aee690d47f471a95bc015fd6771879fb0da9d5c9c2b0b" exitCode=0
Feb 20 12:04:55.703613 master-0 kubenswrapper[31420]: I0220 12:04:55.703531 31420 generic.go:334] "Generic (PLEG): container finished" podID="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" containerID="f3706b3c34cf4ca963f10ba2e8498b0291187d135d8a240b66a3eb3e3ede44fb" exitCode=0
Feb 20 12:04:55.703710 master-0 kubenswrapper[31420]: I0220 12:04:55.703659 31420 generic.go:334] "Generic (PLEG): container finished" podID="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" containerID="55207161b0670236349ac65a2776c47132c8ff804fc186b630f3016022116ce7" exitCode=0
Feb 20 12:04:55.703710 master-0 kubenswrapper[31420]: I0220 12:04:55.703679 31420 generic.go:334] "Generic (PLEG): container finished" podID="ce2b6fde-de56-49c3-9bd6-e81c679b02bc" containerID="cbf4059981e662ffa8f5572d1a08ac8b15a360a7ff62236f5ccfa4eb74c73c26" exitCode=0
Feb 20 12:04:55.705295 master-0 kubenswrapper[31420]: I0220 12:04:55.705269 31420 generic.go:334] "Generic (PLEG): container finished" podID="a41b23ca-9eed-4eb9-95dc-92418a6f4e86" containerID="2260e76cd3d2df450c12a0158d94e76ddd7d3b92f4e2a837f57c5c73685c7d75" exitCode=0
Feb 20 12:04:55.712419 master-0 kubenswrapper[31420]: I0220 12:04:55.712384 31420 generic.go:334] "Generic (PLEG): container finished" podID="8df029f2-d0ec-4543-9371-7694b1e85a06" containerID="98f8ace42aab6b9228e43ce90bfe3ea401b8bf607616e8f25c18422bae53c536" exitCode=0
Feb 20 12:04:55.712419 master-0 kubenswrapper[31420]: I0220 12:04:55.712411 31420 generic.go:334] "Generic (PLEG): container finished" podID="8df029f2-d0ec-4543-9371-7694b1e85a06" containerID="9d2f7db518937a8a9582f2e5f129e777d3a2b50cb5bc9e1a2c9bbfd577def479" exitCode=0
Feb 20 12:04:55.714110 master-0 kubenswrapper[31420]: I0220 12:04:55.714079 31420 generic.go:334] "Generic (PLEG): container finished" podID="5710eb66-9717-4beb-a8b2-19f6886376b3" containerID="b016752d8ba5cbc29441e53dbfb424ff953b01aa96097dd394c1910c4e093b09" exitCode=0
Feb 20 12:04:55.717104 master-0 kubenswrapper[31420]: I0220 12:04:55.717045 31420 generic.go:334] "Generic (PLEG): container finished" podID="f98aeaf7-bf1a-46af-bf1b-85713baa4c67" containerID="f8d154b1c828589837ec3c8ec4ad4d835c269d69b663caaef17de5eec1f25aa8" exitCode=0
Feb 20 12:04:55.719220 master-0 kubenswrapper[31420]: I0220 12:04:55.719187 31420 generic.go:334] "Generic (PLEG): container finished" podID="35310285-fff9-43d6-ad9a-5d959ef116ec" containerID="359ae664c23ea8eeb6016bf515179345b86f7e1a68413d3d25df9e81032b59ac" exitCode=0
Feb 20 12:04:55.721822 master-0 kubenswrapper[31420]: I0220 12:04:55.721798 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-fn7j5_21e8e44b-b883-4afb-af90-d6c1265edf34/control-plane-machine-set-operator/0.log"
Feb 20 12:04:55.721908 master-0 kubenswrapper[31420]: I0220 12:04:55.721832 31420 generic.go:334] "Generic (PLEG): container finished" podID="21e8e44b-b883-4afb-af90-d6c1265edf34" containerID="35ae4f95ab0c966594fd2d547d61e743ca73d94994a40e72e5d8f5673d88afb4" exitCode=1
Feb 20 12:04:55.723695 master-0 kubenswrapper[31420]: I0220 12:04:55.723674 31420 generic.go:334] "Generic (PLEG): container finished" podID="e0b28c90-d5b6-44f3-867c-020ece32ac7d" containerID="77890d6705292359843e6d71e469ce5d5c4b9d196554afc0ee3e0617dea2273f" exitCode=0
Feb 20 12:04:55.728500 master-0 kubenswrapper[31420]: I0220 12:04:55.728477 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/cluster-baremetal-operator/1.log"
Feb 20 12:04:55.728807 master-0 kubenswrapper[31420]: I0220 12:04:55.728776 31420 generic.go:334] "Generic (PLEG): container finished" podID="bd609bd3-2525-4b88-8f07-94a0418fb582" containerID="1f0c874f0434630bd93de4bc13495f67300659cb1712b213b4e98726a3091219" exitCode=1
Feb 20 12:04:55.730716 master-0 kubenswrapper[31420]: I0220 12:04:55.730657 31420 generic.go:334] "Generic (PLEG): container finished" podID="148cc321-3a17-4852-a75a-e8ac95139eb8" containerID="02fab2fdba837309f8086a9fb1b2446510dff4e4ca65a786c3fc86a795a7af11" exitCode=0
Feb 20 12:04:55.737912 master-0 kubenswrapper[31420]: I0220 12:04:55.737871 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_aa2f6c0cf73fadd0d96a26150bb4dbb3/kube-scheduler-cert-syncer/0.log"
Feb 20 12:04:55.738530 master-0 kubenswrapper[31420]: I0220 12:04:55.738499 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_aa2f6c0cf73fadd0d96a26150bb4dbb3/kube-scheduler/0.log"
Feb 20 12:04:55.740205 master-0 kubenswrapper[31420]: I0220 12:04:55.740157 31420 generic.go:334] "Generic (PLEG): container finished" podID="aa2f6c0cf73fadd0d96a26150bb4dbb3" containerID="b3973bb4e0436fc81dccb8348c1f9f8491e95c0a5851afc33de82d620bb3b291" exitCode=1
Feb 20 12:04:55.740260 master-0 kubenswrapper[31420]: I0220 12:04:55.740204 31420 generic.go:334] "Generic (PLEG): container finished" podID="aa2f6c0cf73fadd0d96a26150bb4dbb3" containerID="7e6c16941011718bcf6a9f94acdb17c25246b75a0407ed5d83ac4536ca1a0a88" exitCode=1
Feb 20 12:04:55.740260 master-0 kubenswrapper[31420]: I0220 12:04:55.740215 31420 generic.go:334] "Generic (PLEG): container finished" podID="aa2f6c0cf73fadd0d96a26150bb4dbb3" containerID="970c2892630032916bff16279185e91dd2db588a1ad81c9a738b21187856ab20" exitCode=0
Feb 20 12:04:55.742604 master-0 kubenswrapper[31420]: I0220 12:04:55.742512 31420 generic.go:334] "Generic (PLEG): container finished" podID="5827049e-6178-46cf-83c5-cff55daac768" containerID="2f6e2cdefb1f6c8584f138cfc2ac8b1cae268cc4e1730c5cf5119ebd8fc9f159" exitCode=0
Feb 20 12:04:55.744967 master-0 kubenswrapper[31420]: I0220 12:04:55.744842 31420 generic.go:334] "Generic (PLEG): container finished" podID="5360f3f5-2d07-432f-af45-22659538c55e" containerID="2d9f878c267250c634175c8afa99432d0586168560ba8d948183859d4b64504a" exitCode=0
Feb 20 12:04:55.746570 master-0 kubenswrapper[31420]: I0220 12:04:55.746519 31420 generic.go:334] "Generic (PLEG): container finished" podID="fca213c3-42ca-4341-a2e6-a143b9389f9e" containerID="046adf25484f79333206c0fe041bc8e17c66fcf7f4778670c5be72d62d2804ae" exitCode=0
Feb 20 12:04:55.752370 master-0 kubenswrapper[31420]: I0220 12:04:55.752333 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/3.log"
Feb 20 12:04:55.752454 master-0 kubenswrapper[31420]: I0220 12:04:55.752382 31420 generic.go:334] "Generic (PLEG): container finished" podID="bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4" containerID="de9e8c93c0df2890c4752dca06392332c979a4e0bdd653de93c036cd77ec19ee" exitCode=1
Feb 20 12:04:55.754381 master-0 kubenswrapper[31420]: I0220 12:04:55.754351 31420 generic.go:334] "Generic (PLEG): container finished" podID="29489539-68c6-49dd-bc1b-dcf0c7bb2ebe" containerID="4b16a34c164e3dca501c4332ff0f388668786b32102a5a19b7bf01b7c8440060" exitCode=0
Feb 20 12:04:55.756708 master-0 kubenswrapper[31420]: I0220 12:04:55.756682 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/4.log"
Feb 20 12:04:55.756985 master-0 kubenswrapper[31420]: I0220 12:04:55.756957 31420 generic.go:334] "Generic (PLEG): container finished" podID="db2a7cb1-1d05-4b24-86ed-f823fad5013e" containerID="0590c15c418b11d5d4116f66dd28e44110553b9745f97ed4ab83a1781e3345eb" exitCode=1
Feb 20 12:04:55.758822 master-0 kubenswrapper[31420]: I0220 12:04:55.758795 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-mr99g_dbce6cdc-040a-48e1-8a81-b6ff9c180eba/package-server-manager/0.log"
Feb 20 12:04:55.760918 master-0 kubenswrapper[31420]: I0220 12:04:55.759697 31420 generic.go:334] "Generic (PLEG): container finished" podID="dbce6cdc-040a-48e1-8a81-b6ff9c180eba" containerID="d69dad82c79e06506f238a23ca41e2826075f52d69f22b3756440b59139033ec" exitCode=1
Feb 20 12:04:55.761484 master-0 kubenswrapper[31420]: I0220 12:04:55.761457 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-psm4s_836a6d7e-9b26-425f-ae21-00422515d7fe/approver/1.log"
Feb 20 12:04:55.761888 master-0 kubenswrapper[31420]: I0220 12:04:55.761863 31420 generic.go:334] "Generic (PLEG): container finished" podID="836a6d7e-9b26-425f-ae21-00422515d7fe" containerID="8ee62624db1bf28c038634c2f6ef81ccfdeef3084369265ba22b099552cdd3a8" exitCode=1
Feb 20 12:04:55.765472 master-0 kubenswrapper[31420]: I0220 12:04:55.765450 31420 generic.go:334] "Generic (PLEG): container finished" podID="7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca" containerID="e47a05c8d2dbbc49205addf05b6f326c0f38dfd41f3498f290a08ebfa22cbc94" exitCode=0
Feb 20 12:04:55.767161 master-0 kubenswrapper[31420]: I0220 12:04:55.767146 31420 generic.go:334] "Generic (PLEG): container finished" podID="01e90033-9ddf-41b4-ab61-e89add6c2fde" containerID="f9528f6d61bdc5d1282c2d9d2f6d9758a8e04364c9337158e14aef2c2ffff6b4" exitCode=0
Feb 20 12:04:55.772744 master-0 kubenswrapper[31420]: I0220 12:04:55.772712 31420 generic.go:334] "Generic (PLEG): container finished" podID="eb135cff-1a2e-468d-80ab-f7db3f57552a" containerID="9583a5d028e457a8b1106eee87ac3a3f6e2e8ded0c2d13dad805b6ccfd5190e1" exitCode=0
Feb 20 12:04:55.781739 master-0 kubenswrapper[31420]: I0220 12:04:55.781706 31420 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="9c73ec43a36008a1472e95cc448d96cb453a34c7d0f5983c1a526f4f124df839" exitCode=0
Feb 20 12:04:55.781739 master-0 kubenswrapper[31420]: I0220 12:04:55.781736 31420 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="f69d0d27fc97dfc5ca9cd544f311dfc218b6f712d28eef596d03ab2168409d7f" exitCode=0
Feb 20 12:04:55.781739 master-0 kubenswrapper[31420]: I0220 12:04:55.781745 31420 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="3aec6ee8f7b5920e9465051d7cfad692f6df3984abc458694d67b2ca16e3fc95" exitCode=0
Feb 20 12:04:55.781930 master-0 kubenswrapper[31420]: I0220 12:04:55.781752 31420 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="f1fbf807f82eab937178a587053f37db417fee5bbaad310485c0ef4a2b0f6684" exitCode=0
Feb 20 12:04:55.781930 master-0 kubenswrapper[31420]: I0220 12:04:55.781760 31420 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="59c4640ef16d19d630f393a377f5a55900e0d594bb8e948836367e29624486c7" exitCode=0
Feb 20 12:04:55.781930 master-0 kubenswrapper[31420]: I0220 12:04:55.781766 31420 generic.go:334] "Generic (PLEG): container finished" podID="07281644-2789-424f-8429-aa4448dda01e" containerID="fd6a9476a5e46b15b6371b4f9b6a262cda38dc0b2ce85f673487d39ba4902d2c" exitCode=0
Feb 20 12:04:55.787093 master-0 kubenswrapper[31420]: I0220 12:04:55.787066 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_74e9ba02-39d0-41fb-aed1-39923698bc0b/installer/0.log"
Feb 20 12:04:55.787297 master-0 kubenswrapper[31420]: I0220 12:04:55.787275 31420 generic.go:334] "Generic (PLEG): container finished" podID="74e9ba02-39d0-41fb-aed1-39923698bc0b" containerID="ca54dfc79fe363224f0633dc3e9a5365e79752aa92793a430f4511b5aeb939dc" exitCode=1
Feb 20 12:04:55.797765 master-0 kubenswrapper[31420]: E0220 12:04:55.797714 31420 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 20 12:04:55.818600 master-0 kubenswrapper[31420]: I0220 12:04:55.818330 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-k8vs5_d9f9442b-25b9-420f-b748-bb13423809fe/manager/1.log"
Feb 20 12:04:55.823784 master-0 kubenswrapper[31420]: I0220 12:04:55.823723 31420
generic.go:334] "Generic (PLEG): container finished" podID="d9f9442b-25b9-420f-b748-bb13423809fe" containerID="84ef230cc54cd476fcc604e2b0f1b7222d22839f67c943242d5c00ce3857fed6" exitCode=1 Feb 20 12:04:55.828000 master-0 kubenswrapper[31420]: I0220 12:04:55.827968 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-qg84l_7635c0ff-4d40-4310-8187-230323e504e0/machine-approver-controller/0.log" Feb 20 12:04:55.828280 master-0 kubenswrapper[31420]: I0220 12:04:55.828251 31420 generic.go:334] "Generic (PLEG): container finished" podID="7635c0ff-4d40-4310-8187-230323e504e0" containerID="43e3bfd2d03db486eaa07c471fb4184138af1fd2a51e7d71dbadb2ebc26dee9d" exitCode=255 Feb 20 12:04:55.829555 master-0 kubenswrapper[31420]: I0220 12:04:55.829538 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_7de8fb9d-34f7-49bc-867d-827a0f9a11e7/installer/0.log" Feb 20 12:04:55.829675 master-0 kubenswrapper[31420]: I0220 12:04:55.829659 31420 generic.go:334] "Generic (PLEG): container finished" podID="7de8fb9d-34f7-49bc-867d-827a0f9a11e7" containerID="56ae66462a4df6b3b10343480cd4dc180d6cf045523fb628f58018d2caac8f02" exitCode=1 Feb 20 12:04:55.832448 master-0 kubenswrapper[31420]: I0220 12:04:55.832414 31420 generic.go:334] "Generic (PLEG): container finished" podID="305f625e-16b0-4840-a9e2-25571b49ad2a" containerID="aa7475b04d1f2f206998430be0a72c2f43703844dbdb13b2c6bf74e325b14f62" exitCode=0 Feb 20 12:04:55.834318 master-0 kubenswrapper[31420]: I0220 12:04:55.834248 31420 generic.go:334] "Generic (PLEG): container finished" podID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerID="b4292dccd690e9143e933dee29f59d01786a2f035fd7b57469d300f2f8a55365" exitCode=0 Feb 20 12:04:55.841429 master-0 kubenswrapper[31420]: I0220 12:04:55.841377 31420 generic.go:334] "Generic (PLEG): container finished" podID="1fca5d50-eb5f-4dbb-bdf6-8e07231406f9" 
containerID="74b4edd626e209801e3786cc1dc29bf2a950a730269d6de5ed8a28d1b435f9b4" exitCode=0 Feb 20 12:04:55.990978 master-0 kubenswrapper[31420]: I0220 12:04:55.990890 31420 manager.go:324] Recovery completed Feb 20 12:04:56.093797 master-0 kubenswrapper[31420]: I0220 12:04:56.093740 31420 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 20 12:04:56.093797 master-0 kubenswrapper[31420]: I0220 12:04:56.093771 31420 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 20 12:04:56.093797 master-0 kubenswrapper[31420]: I0220 12:04:56.093791 31420 state_mem.go:36] "Initialized new in-memory state store" Feb 20 12:04:56.094053 master-0 kubenswrapper[31420]: I0220 12:04:56.093975 31420 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 20 12:04:56.094053 master-0 kubenswrapper[31420]: I0220 12:04:56.093986 31420 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 20 12:04:56.094053 master-0 kubenswrapper[31420]: I0220 12:04:56.094013 31420 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 20 12:04:56.094053 master-0 kubenswrapper[31420]: I0220 12:04:56.094019 31420 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 20 12:04:56.094053 master-0 kubenswrapper[31420]: I0220 12:04:56.094026 31420 policy_none.go:49] "None policy: Start" Feb 20 12:04:56.102276 master-0 kubenswrapper[31420]: I0220 12:04:56.102219 31420 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 20 12:04:56.102276 master-0 kubenswrapper[31420]: I0220 12:04:56.102256 31420 state_mem.go:35] "Initializing new in-memory state store" Feb 20 12:04:56.102668 master-0 kubenswrapper[31420]: I0220 12:04:56.102459 31420 state_mem.go:75] "Updated machine memory state" Feb 20 12:04:56.102668 master-0 kubenswrapper[31420]: I0220 12:04:56.102472 31420 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 20 12:04:56.123479 master-0 kubenswrapper[31420]: I0220 12:04:56.123410 31420 
manager.go:334] "Starting Device Plugin manager" Feb 20 12:04:56.123479 master-0 kubenswrapper[31420]: I0220 12:04:56.123488 31420 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 20 12:04:56.123796 master-0 kubenswrapper[31420]: I0220 12:04:56.123504 31420 server.go:79] "Starting device plugin registration server" Feb 20 12:04:56.124022 master-0 kubenswrapper[31420]: I0220 12:04:56.123981 31420 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 20 12:04:56.124106 master-0 kubenswrapper[31420]: I0220 12:04:56.124008 31420 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 20 12:04:56.124204 master-0 kubenswrapper[31420]: I0220 12:04:56.124159 31420 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 20 12:04:56.124346 master-0 kubenswrapper[31420]: I0220 12:04:56.124321 31420 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 20 12:04:56.124346 master-0 kubenswrapper[31420]: I0220 12:04:56.124336 31420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 20 12:04:56.198873 master-0 kubenswrapper[31420]: I0220 12:04:56.198062 31420 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.200368 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2afa2b7ebc56f1b83ba6eea0931272420c7f296c9bd03931d27ab411eab9454b" Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.200468 
31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3590f63863912596b171ca5f35809210ae59c7b19c2fdb801182abdc3cd97397" Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.200645 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55661699f170197933eff4a7d62dfa673dfa4d47667b396a98f1b608289f577a" Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.200759 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"fb183355686e4afc132c4d4de7e53c26823b10e5e50f94804dcb7abd86778e66"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.201512 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"b37a738de54db612fada8bd81cf2bdbaea3d8eea466401eb8ad83715ec6bea2f"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.201632 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c62c9e4d7c03ed804b559ff9f9468e7a7f91ed8870a1b6239bb6b24438d3b6a" Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.201799 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0485d41f9e9692494bf4d5e2e9529ac2d8ff045a5850618429ca0e5f2f95327f" Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.201831 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"ac763378dacfc4363dcfb084085dbc52f6dc5edd975cf1b421f17f519d7cca40"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.201897 31420 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.201917 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.201974 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.201997 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.202018 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.202079 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"1090f75c486a323ae59e4678633d1d8f31d3b0da933bc500d26a674a58096eb0"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.202119 31420 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"b0d8055ab8671dd87e8c6f4600409f2168abd1ce04e2f64cb6ec241a84ad82db"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.202181 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"7d1e40608e20859f716be438c8e8c5245ae85a9137bce4bf53bfccc4ff8fc568"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.202201 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"f81c629f14de2675e27ed03b16f717338fc763727ad4d8279bef5f402d84b0bd"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.202218 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"fb26f752e48be63937e70537d486ea02b5e41733fdb3b27eed62024dc371a88d"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.202235 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"b2b9e99c760ef8b2b1d3b355cd2d86c95a75c8f2455bc3e22e89b188ba7101e3"} Feb 20 12:04:56.202299 master-0 kubenswrapper[31420]: I0220 12:04:56.202295 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17066e06e0b2e2b17534c5886b653c787b7eefd7c251f9787e27a3b174b19ab1" Feb 20 12:04:56.204637 master-0 kubenswrapper[31420]: I0220 12:04:56.202321 31420 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"6a6cafc7c486ca7c318193e8cb75dc02c40abcaf8818e09b14c243a316830547"} Feb 20 12:04:56.205629 master-0 kubenswrapper[31420]: I0220 12:04:56.204871 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"53e7dc45156105f926a77b4b48981d5e387a572098dd2e0e299ab01a43056605"} Feb 20 12:04:56.205696 master-0 kubenswrapper[31420]: I0220 12:04:56.205648 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"fbba6df4a59d8edb9a6ffa0ddbac2d1f8af28cf04b9ed9d72f140a13ab377500"} Feb 20 12:04:56.205696 master-0 kubenswrapper[31420]: I0220 12:04:56.205675 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"fb6f6ab6826113043c422e9cb31e951a4709e29a8f548f2f0410e49be87f511d"} Feb 20 12:04:56.205772 master-0 kubenswrapper[31420]: I0220 12:04:56.205747 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"a238930ecbb0a76e558bddf991220f2abccffd8a3149eaf2e96e10a5a7336ae9"} Feb 20 12:04:56.205772 master-0 kubenswrapper[31420]: I0220 12:04:56.205761 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"9b0fc4bd4c3cfd9b9709f31ae2aefd01c06a176b29710776ce6e72efcd897ae5"} Feb 20 12:04:56.205772 master-0 kubenswrapper[31420]: I0220 
12:04:56.205771 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"5dcb14edc89c8213a50a7e6739f83d85ebe48b452d347e79ab4d89bb7e065fc0"} Feb 20 12:04:56.205879 master-0 kubenswrapper[31420]: I0220 12:04:56.205781 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"b422dd2ce4dd289728186b260c4a3879d1a3b820c3ec1d35590e2886b5db5a66"} Feb 20 12:04:56.205879 master-0 kubenswrapper[31420]: I0220 12:04:56.205792 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"eaf506e1d0783590de31091ab32fce9d35d713eecc3c017ce74b9f3f24f2dadf"} Feb 20 12:04:56.205879 master-0 kubenswrapper[31420]: I0220 12:04:56.205807 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"5fc806dcdedd7688a77f47260543d32b11e6e7c063979fee3300f4d963557c80"} Feb 20 12:04:56.205879 master-0 kubenswrapper[31420]: I0220 12:04:56.205818 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"da92cbde4f74d2a7379dc50dca70ba345568f184d4de102a2743c4569e81bf1e"} Feb 20 12:04:56.205879 master-0 kubenswrapper[31420]: I0220 12:04:56.205827 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"75dfcf2c7e75ed34e7d8254c990b8555834b339c0315692edbac504af1d4c6bd"} Feb 20 12:04:56.205879 master-0 kubenswrapper[31420]: I0220 12:04:56.205836 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"9cc7b181ab55ab6abb3242c925ed6067592af711ebb394b812dbd9cfe003dfbd"} Feb 20 12:04:56.205879 master-0 kubenswrapper[31420]: I0220 12:04:56.205881 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce9ed94bd982d2f41102a55cb2e618edd19c6224d6f0adfa7cf35da3a1237451" Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.205909 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97dbf6403141d9540379400f393a21ef236f6a9b6384164aeddd18c354d998df" Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.205925 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4736b5e4686f09c0b07f8d18c3b19a3ccd55085c207b7cd94523bcb6efbbf4ee" Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.205958 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d637b82c89f26c321cedef58ed73b3beb4ce3dd682ac20250458654f4757c3e7" Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.205965 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"37900565eb75d9b798f3f149616903b7d394f85e312ddc281cb50f56eac08ff1"} Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.205976 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"6328c06f948b47accbad70a96e466cffb8cc3d168855718a0c1e45d2ff1a2d20"} Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.205985 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"fa37ec307276e57d8dcd075e874fe4bda0e8faf9e0a2759374c512cf7a51b796"} Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.205995 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerDied","Data":"b3973bb4e0436fc81dccb8348c1f9f8491e95c0a5851afc33de82d620bb3b291"} Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.206021 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerDied","Data":"7e6c16941011718bcf6a9f94acdb17c25246b75a0407ed5d83ac4536ca1a0a88"} Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.206031 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerDied","Data":"970c2892630032916bff16279185e91dd2db588a1ad81c9a738b21187856ab20"} Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.206045 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa2f6c0cf73fadd0d96a26150bb4dbb3","Type":"ContainerStarted","Data":"c16bea21819b6e1d15437de870badf6de1fc66d12913185c9cecf80f61f24b54"} Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.206059 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985737b750f90a1abc5074451459d70393e87cfe0c6e8a5a88f5b55243c61581" Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.206169 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79f70b0ba5af48f333359cfd6f71307155a704d196b35bf91b2237ea4c31acbc" Feb 20 
12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.206199 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52931bca33b633a8f7b4a404b3d376c51a9562b00ed924bbb1fbf19380cd707f" Feb 20 12:04:56.206371 master-0 kubenswrapper[31420]: I0220 12:04:56.206220 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf890effc236ee0e21e9e57ddce2324a331c4793a53024dbaa2deb40164eb945" Feb 20 12:04:56.212135 master-0 kubenswrapper[31420]: E0220 12:04:56.211707 31420 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.215044 master-0 kubenswrapper[31420]: E0220 12:04:56.214996 31420 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 12:04:56.226418 master-0 kubenswrapper[31420]: I0220 12:04:56.226378 31420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 12:04:56.229968 master-0 kubenswrapper[31420]: I0220 12:04:56.229925 31420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 12:04:56.229968 master-0 kubenswrapper[31420]: I0220 12:04:56.229967 31420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 12:04:56.230063 master-0 kubenswrapper[31420]: I0220 12:04:56.229976 31420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 12:04:56.230063 master-0 kubenswrapper[31420]: I0220 12:04:56.230059 31420 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 12:04:56.242909 master-0 kubenswrapper[31420]: I0220 12:04:56.242866 31420 
kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 20 12:04:56.243077 master-0 kubenswrapper[31420]: I0220 12:04:56.243006 31420 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 20 12:04:56.311095 master-0 kubenswrapper[31420]: I0220 12:04:56.311027 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.311367 master-0 kubenswrapper[31420]: I0220 12:04:56.311117 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.311367 master-0 kubenswrapper[31420]: I0220 12:04:56.311157 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.311367 master-0 kubenswrapper[31420]: I0220 12:04:56.311213 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.311367 master-0 kubenswrapper[31420]: I0220 
12:04:56.311269 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:04:56.311367 master-0 kubenswrapper[31420]: I0220 12:04:56.311311 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.311367 master-0 kubenswrapper[31420]: I0220 12:04:56.311341 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.311639 master-0 kubenswrapper[31420]: I0220 12:04:56.311386 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.311639 master-0 kubenswrapper[31420]: I0220 12:04:56.311465 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.311639 master-0 kubenswrapper[31420]: I0220 12:04:56.311552 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.311639 master-0 kubenswrapper[31420]: I0220 12:04:56.311581 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:56.311639 master-0 kubenswrapper[31420]: I0220 12:04:56.311616 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 12:04:56.311639 master-0 kubenswrapper[31420]: I0220 12:04:56.311637 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 12:04:56.311804 master-0 kubenswrapper[31420]: I0220 12:04:56.311661 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.311804 master-0 kubenswrapper[31420]: I0220 12:04:56.311683 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.311804 master-0 kubenswrapper[31420]: I0220 12:04:56.311717 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.311804 master-0 kubenswrapper[31420]: I0220 12:04:56.311738 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.311804 master-0 kubenswrapper[31420]: I0220 12:04:56.311758 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:56.311804 master-0 kubenswrapper[31420]: I0220 12:04:56.311775 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:04:56.311804 master-0 kubenswrapper[31420]: I0220 12:04:56.311792 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.323773 master-0 kubenswrapper[31420]: E0220 12:04:56.323722 31420 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:56.325408 master-0 kubenswrapper[31420]: E0220 12:04:56.325372 31420 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:04:56.326589 master-0 kubenswrapper[31420]: E0220 12:04:56.326514 31420 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.412563 master-0 kubenswrapper[31420]: I0220 12:04:56.412481 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412577 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412652 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412725 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412750 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412771 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 
12:04:56.412789 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412809 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412826 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412860 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412880 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412903 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412921 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412940 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.412985 master-0 kubenswrapper[31420]: I0220 12:04:56.412960 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413002 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413023 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413054 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413081 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413115 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413141 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413187 31420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413235 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413285 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413320 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413357 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413395 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413436 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.413463 master-0 kubenswrapper[31420]: I0220 12:04:56.413463 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413489 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413516 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413572 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413600 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413627 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413652 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413677 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413702 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/aa2f6c0cf73fadd0d96a26150bb4dbb3-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa2f6c0cf73fadd0d96a26150bb4dbb3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413726 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413754 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 12:04:56.413933 master-0 kubenswrapper[31420]: I0220 12:04:56.413789 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.428834 master-0 kubenswrapper[31420]: I0220 12:04:56.428794 31420 apiserver.go:52] "Watching apiserver" Feb 20 12:04:56.452043 master-0 kubenswrapper[31420]: I0220 12:04:56.451939 31420 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 20 12:04:56.454035 master-0 kubenswrapper[31420]: I0220 12:04:56.453957 31420 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/multus-9qpc7","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8","openshift-etcd/installer-1-master-0","openshift-ingress-canary/ingress-canary-f6xzr","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85","openshift-kube-scheduler/installer-2-retry-1-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-machine-config-operator/machine-config-server-4wkh4","openshift-monitoring/telemeter-client-796b9bd86f-sp4fc","openshift-service-ca/service-ca-576b4d78bd-5fph4","openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst","openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj","openshift-dns/dns-default-kx4ch","openshift-kube-apiserver/installer-3-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-network-operator/network-operator-7d7db75979-fv598","openshift-cluster-node-tuning-operator/tuned-z82cm","openshift-cluster-version/cluster-version-operator-57476485-dwvgg","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75","openshift-ingress-operator/ingress-operator-6569778c84-kw2v6","openshift-ingress/router-default-7b65dc9fcb-fkkd5","openshift-kube-apiserver/installer-2-master-0","openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh","openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f","openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2","assisted-installer/assisted-installer-controller-s6zmp","openshift-dns-operator/dns-operator-8c7d49845-qhx9j","openshift-kube-controller-manager/installer-3-master-0","openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l","openshift-monitoring/node-exporter-8d7nc","openshift-multus/multus-additional-c
ni-plugins-f2l64","openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg","openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9","openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt","openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq","openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd","openshift-monitoring/prometheus-operator-754bc4d665-5kbrl","openshift-ovn-kubernetes/ovnkube-node-7l848","openshift-apiserver/apiserver-7666bb78cc-jxswr","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc","openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt","openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk","openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6","openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw","openshift-etcd/etcd-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/machine-config-daemon-mpwks","openshift-marketplace/community-operators-7kn5q","openshift-network-diagnostics/network-check-target-h5w2t","openshift-marketplace/redhat-marketplace-89t2q","openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv","openshift-network-node-identity/network-node-identity-psm4s","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw","openshift-kube-controller-manager/installer-2-master-0","openshift-network-operator/iptables-alerter-gkxzr"
,"openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg","openshift-controller-manager/controller-manager-599c7886f5-zltnd","openshift-dns/node-resolver-jlp7n","openshift-marketplace/certified-operators-76v4z","openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt","openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q","openshift-insights/insights-operator-59b498fcfb-hsjr7","openshift-kube-apiserver/installer-1-master-0","openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l","openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn","openshift-etcd/installer-2-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-scheduler/installer-3-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8","openshift-marketplace/marketplace-operator-6f5488b997-nr4tg","openshift-kube-scheduler/installer-2-master-0","openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5","openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr","openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt","openshift-multus/network-metrics-daemon-29622","openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4","openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn","openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-marketplace/redhat-operators-q287t","openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"] Feb 20 12:04:56.454444 master-0 kubenswrapper[31420]: I0220 12:04:56.454368 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-s6zmp" Feb 20 12:04:56.457162 master-0 kubenswrapper[31420]: I0220 12:04:56.456537 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 20 12:04:56.458562 master-0 kubenswrapper[31420]: I0220 12:04:56.458523 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 20 12:04:56.458756 master-0 kubenswrapper[31420]: I0220 12:04:56.458731 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 20 12:04:56.458969 master-0 kubenswrapper[31420]: I0220 12:04:56.458922 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 20 12:04:56.459731 master-0 kubenswrapper[31420]: I0220 12:04:56.459692 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 20 12:04:56.459996 master-0 kubenswrapper[31420]: I0220 12:04:56.459958 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 20 12:04:56.460536 master-0 kubenswrapper[31420]: I0220 12:04:56.460493 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 20 12:04:56.460883 master-0 kubenswrapper[31420]: I0220 12:04:56.460669 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.460883 master-0 kubenswrapper[31420]: I0220 12:04:56.460827 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 20 12:04:56.461072 master-0 kubenswrapper[31420]: I0220 12:04:56.460979 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 20 12:04:56.461216 master-0 kubenswrapper[31420]: I0220 12:04:56.461170 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 20 12:04:56.461601 master-0 kubenswrapper[31420]: I0220 12:04:56.461547 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4" Feb 20 12:04:56.462088 master-0 kubenswrapper[31420]: I0220 12:04:56.462052 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 20 12:04:56.462368 master-0 kubenswrapper[31420]: I0220 12:04:56.462305 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 20 12:04:56.462448 master-0 kubenswrapper[31420]: I0220 12:04:56.462371 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.462613 master-0 kubenswrapper[31420]: I0220 12:04:56.462561 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 20 12:04:56.462685 master-0 kubenswrapper[31420]: I0220 12:04:56.462620 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 20 12:04:56.464294 master-0 kubenswrapper[31420]: I0220 12:04:56.463812 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 20 12:04:56.464294 master-0 kubenswrapper[31420]: I0220 12:04:56.464137 
31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 20 12:04:56.464629 master-0 kubenswrapper[31420]: I0220 12:04:56.464313 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.464818 master-0 kubenswrapper[31420]: I0220 12:04:56.464777 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 20 12:04:56.474677 master-0 kubenswrapper[31420]: I0220 12:04:56.471159 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 20 12:04:56.477586 master-0 kubenswrapper[31420]: I0220 12:04:56.476820 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 20 12:04:56.477586 master-0 kubenswrapper[31420]: I0220 12:04:56.477101 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 20 12:04:56.477586 master-0 kubenswrapper[31420]: I0220 12:04:56.477419 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 20 12:04:56.477586 master-0 kubenswrapper[31420]: I0220 12:04:56.477453 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 20 12:04:56.477920 master-0 kubenswrapper[31420]: I0220 12:04:56.477603 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 20 12:04:56.477920 master-0 kubenswrapper[31420]: I0220 12:04:56.477720 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.477920 
master-0 kubenswrapper[31420]: I0220 12:04:56.477696 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 20 12:04:56.477920 master-0 kubenswrapper[31420]: I0220 12:04:56.477875 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 20 12:04:56.478114 master-0 kubenswrapper[31420]: I0220 12:04:56.478031 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 20 12:04:56.478114 master-0 kubenswrapper[31420]: I0220 12:04:56.478034 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.478205 master-0 kubenswrapper[31420]: I0220 12:04:56.478126 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.478205 master-0 kubenswrapper[31420]: I0220 12:04:56.478033 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 20 12:04:56.478205 master-0 kubenswrapper[31420]: I0220 12:04:56.478188 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.478314 master-0 kubenswrapper[31420]: I0220 12:04:56.478267 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 20 12:04:56.478366 master-0 kubenswrapper[31420]: I0220 12:04:56.478343 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 20 12:04:56.478413 master-0 kubenswrapper[31420]: I0220 12:04:56.478371 31420 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.478595 master-0 kubenswrapper[31420]: I0220 12:04:56.478478 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 20 12:04:56.478595 master-0 kubenswrapper[31420]: I0220 12:04:56.478501 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 20 12:04:56.478595 master-0 kubenswrapper[31420]: I0220 12:04:56.478543 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.478728 master-0 kubenswrapper[31420]: I0220 12:04:56.478625 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 20 12:04:56.479078 master-0 kubenswrapper[31420]: I0220 12:04:56.479059 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 20 12:04:56.483757 master-0 kubenswrapper[31420]: I0220 12:04:56.483498 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 20 12:04:56.485577 master-0 kubenswrapper[31420]: I0220 12:04:56.485497 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 20 12:04:56.485798 master-0 kubenswrapper[31420]: I0220 12:04:56.485759 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 20 12:04:56.485855 master-0 kubenswrapper[31420]: I0220 12:04:56.485790 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 20 12:04:56.485855 master-0 kubenswrapper[31420]: I0220 12:04:56.485800 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-operator"/"metrics-tls" Feb 20 12:04:56.485966 master-0 kubenswrapper[31420]: I0220 12:04:56.485882 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 20 12:04:56.485966 master-0 kubenswrapper[31420]: I0220 12:04:56.485776 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 20 12:04:56.485966 master-0 kubenswrapper[31420]: I0220 12:04:56.485960 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 20 12:04:56.486091 master-0 kubenswrapper[31420]: I0220 12:04:56.485817 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 20 12:04:56.486091 master-0 kubenswrapper[31420]: I0220 12:04:56.486067 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.486261 master-0 kubenswrapper[31420]: I0220 12:04:56.486218 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 20 12:04:56.486261 master-0 kubenswrapper[31420]: I0220 12:04:56.486255 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 20 12:04:56.486929 master-0 kubenswrapper[31420]: I0220 12:04:56.486313 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 20 12:04:56.486929 master-0 kubenswrapper[31420]: I0220 12:04:56.486416 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 20 12:04:56.486929 master-0 kubenswrapper[31420]: I0220 12:04:56.486632 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 20 12:04:56.486929 master-0 kubenswrapper[31420]: I0220 12:04:56.486680 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 20 12:04:56.486929 master-0 kubenswrapper[31420]: I0220 12:04:56.486755 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 20 12:04:56.486929 master-0 kubenswrapper[31420]: I0220 12:04:56.486835 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 20 12:04:56.486929 master-0 kubenswrapper[31420]: I0220 12:04:56.486879 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 20 12:04:56.487202 master-0 kubenswrapper[31420]: I0220 12:04:56.486970 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 20 12:04:56.487202 master-0 kubenswrapper[31420]: I0220 12:04:56.487070 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 20 12:04:56.487202 master-0 kubenswrapper[31420]: I0220 12:04:56.487113 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 20 12:04:56.487315 master-0 kubenswrapper[31420]: I0220 12:04:56.487281 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 20 12:04:56.487359 master-0 kubenswrapper[31420]: I0220 12:04:56.487311 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 20 12:04:56.487359 master-0 kubenswrapper[31420]: I0220 12:04:56.487326 31420 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 20 12:04:56.487438 master-0 kubenswrapper[31420]: I0220 12:04:56.487406 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 20 12:04:56.487438 master-0 kubenswrapper[31420]: I0220 12:04:56.487434 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 20 12:04:56.487523 master-0 kubenswrapper[31420]: I0220 12:04:56.487468 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 20 12:04:56.487523 master-0 kubenswrapper[31420]: I0220 12:04:56.487468 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 20 12:04:56.487523 master-0 kubenswrapper[31420]: I0220 12:04:56.487508 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 20 12:04:56.487523 master-0 kubenswrapper[31420]: I0220 12:04:56.487522 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 20 12:04:56.487779 master-0 kubenswrapper[31420]: I0220 12:04:56.487629 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 20 12:04:56.487827 master-0 kubenswrapper[31420]: I0220 12:04:56.487800 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 20 12:04:56.488016 master-0 kubenswrapper[31420]: I0220 12:04:56.487988 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 20 12:04:56.488424 master-0 kubenswrapper[31420]: I0220 12:04:56.488373 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Feb 20 12:04:56.489776 master-0 kubenswrapper[31420]: I0220 12:04:56.489169 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 12:04:56.489776 master-0 kubenswrapper[31420]: I0220 12:04:56.489468 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 20 12:04:56.491161 master-0 kubenswrapper[31420]: I0220 12:04:56.491104 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 20 12:04:56.491244 master-0 kubenswrapper[31420]: I0220 12:04:56.491175 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 12:04:56.491496 master-0 kubenswrapper[31420]: I0220 12:04:56.491463 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 20 12:04:56.491765 master-0 kubenswrapper[31420]: I0220 12:04:56.491739 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 20 12:04:56.492123 master-0 kubenswrapper[31420]: I0220 12:04:56.491950 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 20 12:04:56.492299 master-0 kubenswrapper[31420]: I0220 12:04:56.492261 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 20 12:04:56.492905 master-0 kubenswrapper[31420]: I0220 12:04:56.492773 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.492905 
master-0 kubenswrapper[31420]: I0220 12:04:56.492803 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 20 12:04:56.494253 master-0 kubenswrapper[31420]: I0220 12:04:56.493068 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 20 12:04:56.494253 master-0 kubenswrapper[31420]: I0220 12:04:56.493183 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 20 12:04:56.494253 master-0 kubenswrapper[31420]: I0220 12:04:56.493208 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 20 12:04:56.494471 master-0 kubenswrapper[31420]: I0220 12:04:56.494453 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 20 12:04:56.498568 master-0 kubenswrapper[31420]: I0220 12:04:56.495410 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 20 12:04:56.498568 master-0 kubenswrapper[31420]: I0220 12:04:56.495411 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.498568 master-0 kubenswrapper[31420]: I0220 12:04:56.497505 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 20 12:04:56.498568 master-0 kubenswrapper[31420]: I0220 12:04:56.498048 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 20 12:04:56.498568 master-0 kubenswrapper[31420]: I0220 12:04:56.497568 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 20 12:04:56.498766 master-0 kubenswrapper[31420]: I0220 12:04:56.497743 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 20 12:04:56.499396 master-0 kubenswrapper[31420]: I0220 12:04:56.498815 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 20 12:04:56.501382 master-0 kubenswrapper[31420]: I0220 12:04:56.501342 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 20 12:04:56.502214 master-0 kubenswrapper[31420]: I0220 12:04:56.502195 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 20 12:04:56.502950 master-0 kubenswrapper[31420]: I0220 12:04:56.502915 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-retry-1-master-0" Feb 20 12:04:56.503996 master-0 kubenswrapper[31420]: I0220 12:04:56.503275 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 20 12:04:56.504978 master-0 kubenswrapper[31420]: I0220 12:04:56.504956 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 20 12:04:56.505477 master-0 kubenswrapper[31420]: I0220 12:04:56.505420 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 20 12:04:56.511649 master-0 kubenswrapper[31420]: I0220 12:04:56.510893 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.511649 master-0 kubenswrapper[31420]: I0220 12:04:56.511522 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 12:04:56.511649 master-0 kubenswrapper[31420]: I0220 12:04:56.511558 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 20 12:04:56.512007 master-0 kubenswrapper[31420]: I0220 12:04:56.511980 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 12:04:56.514344 master-0 kubenswrapper[31420]: I0220 12:04:56.514308 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.517935 master-0 kubenswrapper[31420]: I0220 12:04:56.515165 31420 scope.go:117] "RemoveContainer" containerID="ac763378dacfc4363dcfb084085dbc52f6dc5edd975cf1b421f17f519d7cca40" Feb 20 12:04:56.517935 master-0 kubenswrapper[31420]: I0220 12:04:56.516527 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 20 12:04:56.519755 master-0 kubenswrapper[31420]: I0220 12:04:56.519703 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 20 12:04:56.535839 master-0 kubenswrapper[31420]: I0220 12:04:56.535794 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 20 12:04:56.557203 master-0 kubenswrapper[31420]: I0220 12:04:56.557160 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 20 12:04:56.559083 master-0 kubenswrapper[31420]: I0220 12:04:56.559050 
31420 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 20 12:04:56.575864 master-0 kubenswrapper[31420]: I0220 12:04:56.575832 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:56.583932 master-0 kubenswrapper[31420]: I0220 12:04:56.583897 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 20 12:04:56.595792 master-0 kubenswrapper[31420]: I0220 12:04:56.595757 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 20 12:04:56.616066 master-0 kubenswrapper[31420]: I0220 12:04:56.616025 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 20 12:04:56.617711 master-0 kubenswrapper[31420]: I0220 12:04:56.617669 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64ql\" (UniqueName: \"kubernetes.io/projected/89ed6373-78f8-4d77-82b2-1ab055b5b862-kube-api-access-f64ql\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:04:56.617711 master-0 kubenswrapper[31420]: I0220 12:04:56.617708 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 12:04:56.617868 master-0 kubenswrapper[31420]: I0220 12:04:56.617728 31420 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-7r85p\" (UniqueName: \"kubernetes.io/projected/b9fe0660-fae4-4f97-8895-dbc4845cee40-kube-api-access-7r85p\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" Feb 20 12:04:56.617868 master-0 kubenswrapper[31420]: I0220 12:04:56.617744 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 12:04:56.617868 master-0 kubenswrapper[31420]: I0220 12:04:56.617764 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89ed6373-78f8-4d77-82b2-1ab055b5b862-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:04:56.617868 master-0 kubenswrapper[31420]: I0220 12:04:56.617779 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fe0660-fae4-4f97-8895-dbc4845cee40-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" Feb 20 12:04:56.617868 master-0 kubenswrapper[31420]: I0220 12:04:56.617794 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxr6j\" (UniqueName: \"kubernetes.io/projected/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-kube-api-access-rxr6j\") pod 
\"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" Feb 20 12:04:56.617868 master-0 kubenswrapper[31420]: I0220 12:04:56.617817 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-hostroot\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.617868 master-0 kubenswrapper[31420]: I0220 12:04:56.617841 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dx69\" (UniqueName: \"kubernetes.io/projected/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-kube-api-access-2dx69\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.617862 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.617907 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-image-import-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.617923 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ms8wk\" (UniqueName: \"kubernetes.io/projected/836a6d7e-9b26-425f-ae21-00422515d7fe-kube-api-access-ms8wk\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.617938 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.617954 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.617970 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwgg6\" (UniqueName: \"kubernetes.io/projected/8df029f2-d0ec-4543-9371-7694b1e85a06-kube-api-access-kwgg6\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.617987 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-proxy-tls\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618001 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j4cs\" (UniqueName: \"kubernetes.io/projected/478be5e4-cf17-4ebf-a45a-c18cd2b69929-kube-api-access-5j4cs\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618017 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618032 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-netns\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618047 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618063 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618079 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-config\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618094 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-kubelet\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618110 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-multus\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618128 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hxz5\" (UniqueName: \"kubernetes.io/projected/19cf75ed-6a4e-444d-8975-fa6ecba79f13-kube-api-access-7hxz5\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618144 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 12:04:56.618138 master-0 kubenswrapper[31420]: I0220 12:04:56.618160 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpz9d\" (UniqueName: \"kubernetes.io/projected/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-api-access-hpz9d\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618175 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618192 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ts6s\" (UniqueName: \"kubernetes.io/projected/31969539-bfd1-466f-8697-f13cbbd957df-kube-api-access-7ts6s\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618207 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-default-certificate\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: 
\"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618224 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-images\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618253 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/22bba1b3-587d-4802-b4ae-946827c3fa7a-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618267 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzqt\" (UniqueName: \"kubernetes.io/projected/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-kube-api-access-kfzqt\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618283 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618302 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618319 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1388469-5e55-4c1b-97c3-c88777f29ae7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618340 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c078827-3bdb-4509-aeb3-eb558df1f6e7-service-ca-bundle\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618356 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618374 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-var-lib-kubelet\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " 
pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618390 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-apiservice-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618404 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-stats-auth\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618421 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618435 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618450 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618467 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-system-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618483 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618498 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618513 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-node-log\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.618858 
master-0 kubenswrapper[31420]: I0220 12:04:56.618530 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-os-release\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618545 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618581 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ab951b1-6898-4357-b813-16365f3f89d5-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618597 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ae1fd116-6f63-4344-b7af-278665649e5a-tmpfs\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618613 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " 
pod="openshift-multus/network-metrics-daemon-29622" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618633 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d9f9442b-25b9-420f-b748-bb13423809fe-cache\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618656 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqxhp\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-kube-api-access-lqxhp\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618678 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/906307ef-d988-49e7-9d63-39116a2c4880-iptables-alerter-script\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618703 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:04:56.618858 master-0 kubenswrapper[31420]: I0220 12:04:56.618726 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d4457\" (UniqueName: \"kubernetes.io/projected/6dfca740-0387-428a-b957-3e8a09c6e352-kube-api-access-d4457\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 12:04:56.620047 master-0 kubenswrapper[31420]: I0220 12:04:56.619031 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/22bba1b3-587d-4802-b4ae-946827c3fa7a-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 12:04:56.620047 master-0 kubenswrapper[31420]: I0220 12:04:56.619497 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.620047 master-0 kubenswrapper[31420]: I0220 12:04:56.619592 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 12:04:56.620047 master-0 kubenswrapper[31420]: I0220 12:04:56.619660 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:04:56.620047 master-0 kubenswrapper[31420]: I0220 12:04:56.619663 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ae1fd116-6f63-4344-b7af-278665649e5a-tmpfs\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 12:04:56.620047 master-0 kubenswrapper[31420]: I0220 12:04:56.619801 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d9f9442b-25b9-420f-b748-bb13423809fe-cache\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:56.620047 master-0 kubenswrapper[31420]: I0220 12:04:56.619916 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 12:04:56.620047 master-0 kubenswrapper[31420]: I0220 12:04:56.619930 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1709ef31-9ddd-42bf-9a95-4be4502a0828-metrics-certs\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 12:04:56.620356 master-0 kubenswrapper[31420]: I0220 12:04:56.620104 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-config\") pod 
\"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.620356 master-0 kubenswrapper[31420]: I0220 12:04:56.620144 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-metrics-certs\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:04:56.620356 master-0 kubenswrapper[31420]: I0220 12:04:56.620221 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:04:56.620356 master-0 kubenswrapper[31420]: I0220 12:04:56.620304 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:56.620356 master-0 kubenswrapper[31420]: I0220 12:04:56.620333 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 12:04:56.620630 master-0 kubenswrapper[31420]: I0220 12:04:56.620404 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s4j88\" (UniqueName: \"kubernetes.io/projected/bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4-kube-api-access-s4j88\") pod \"csi-snapshot-controller-6847bb4785-792hn\" (UID: \"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" Feb 20 12:04:56.620630 master-0 kubenswrapper[31420]: I0220 12:04:56.620426 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/906307ef-d988-49e7-9d63-39116a2c4880-iptables-alerter-script\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 12:04:56.620630 master-0 kubenswrapper[31420]: I0220 12:04:56.620432 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 12:04:56.620630 master-0 kubenswrapper[31420]: I0220 12:04:56.620451 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1388469-5e55-4c1b-97c3-c88777f29ae7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 12:04:56.620630 master-0 kubenswrapper[31420]: I0220 12:04:56.620585 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-tuned\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " 
pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.620630 master-0 kubenswrapper[31420]: I0220 12:04:56.620616 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdzzt\" (UniqueName: \"kubernetes.io/projected/8ab951b1-6898-4357-b813-16365f3f89d5-kube-api-access-xdzzt\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" Feb 20 12:04:56.620630 master-0 kubenswrapper[31420]: I0220 12:04:56.620636 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620655 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620672 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-audit-log\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620690 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zkbq\" 
(UniqueName: \"kubernetes.io/projected/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-kube-api-access-2zkbq\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620715 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-ovn\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620741 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620715 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-tuned\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620762 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-certs\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: 
I0220 12:04:56.620772 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-metrics-tls\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620792 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620779 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-audit-log\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620825 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlcjf\" (UniqueName: \"kubernetes.io/projected/5c104245-d078-4856-9a60-207bb6efcfe8-kube-api-access-nlcjf\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620855 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-log-socket\") pod \"ovnkube-node-7l848\" (UID: 
\"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620876 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5360f3f5-2d07-432f-af45-22659538c55e-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620892 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-netns\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.620913 master-0 kubenswrapper[31420]: I0220 12:04:56.620920 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpk24\" (UniqueName: \"kubernetes.io/projected/d65a0af4-c96f-44f8-9384-6bae4585983b-kube-api-access-bpk24\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621016 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-lib-modules\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621039 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2795m\" (UniqueName: \"kubernetes.io/projected/afa174b3-912c-4b56-b5eb-f3e3df012c11-kube-api-access-2795m\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621036 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/22bba1b3-587d-4802-b4ae-946827c3fa7a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621060 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621070 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5360f3f5-2d07-432f-af45-22659538c55e-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621079 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-config\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: 
\"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621097 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-run\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621115 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nd7r\" (UniqueName: \"kubernetes.io/projected/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-kube-api-access-8nd7r\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621133 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621151 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-key\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621167 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" 
(UniqueName: \"kubernetes.io/secret/d9f9442b-25b9-420f-b748-bb13423809fe-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621183 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621274 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-metrics-client-ca\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621299 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-catalog-content\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621317 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621336 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/312ca024-c8f0-4994-8f9a-b707607341fe-metrics-tls\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621354 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-policies\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621375 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-catalog-content\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 12:04:56.621498 master-0 kubenswrapper[31420]: I0220 12:04:56.621496 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621519 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-kubelet\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621560 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-multus-certs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621578 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621582 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/312ca024-c8f0-4994-8f9a-b707607341fe-metrics-tls\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621595 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-cnibin\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621619 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621665 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621690 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpnmz\" (UniqueName: \"kubernetes.io/projected/312ca024-c8f0-4994-8f9a-b707607341fe-kube-api-access-bpnmz\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621716 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-daemon-config\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621734 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-srv-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621736 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-textfile\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621769 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621789 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-textfile\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621791 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-host\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621824 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31969539-bfd1-466f-8697-f13cbbd957df-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621825 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621864 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp57v\" (UniqueName: \"kubernetes.io/projected/59c1cc61-8692-4a35-83fc-6bbef7086117-kube-api-access-mp57v\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621883 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-client\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621904 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621928 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-mcd-auth-proxy-config\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621947 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621965 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-ssl-certs\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.621987 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-daemon-config\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622149 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db2a7cb1-1d05-4b24-86ed-f823fad5013e-metrics-tls\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622190 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-kubernetes\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622193 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-mcd-auth-proxy-config\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622211 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ab951b1-6898-4357-b813-16365f3f89d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622247 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6td56\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-kube-api-access-6td56\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622265 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5360f3f5-2d07-432f-af45-22659538c55e-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622283 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-metrics-client-ca\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622299 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622317 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1388469-5e55-4c1b-97c3-c88777f29ae7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622336 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 12:04:56.622326 master-0 kubenswrapper[31420]: I0220 12:04:56.622356 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf682\" (UniqueName: \"kubernetes.io/projected/ae1fd116-6f63-4344-b7af-278665649e5a-kube-api-access-wf682\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622373 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksx6l\" (UniqueName: \"kubernetes.io/projected/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-kube-api-access-ksx6l\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622391 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/906307ef-d988-49e7-9d63-39116a2c4880-host-slash\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622411 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af18215b-e749-4565-bb6c-24e92c452817-metrics-tls\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622428 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdbadd9-eeaa-46ef-936e-5db8d395c118-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622434 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5360f3f5-2d07-432f-af45-22659538c55e-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622445 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqzpj\" (UniqueName: \"kubernetes.io/projected/aae1df07-cf9f-47a3-b146-2a0adb182660-kube-api-access-qqzpj\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622653 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1388469-5e55-4c1b-97c3-c88777f29ae7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622835 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-srv-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622899 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-slash\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622925 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78bqv\" (UniqueName: \"kubernetes.io/projected/daf25ef5-8247-4dbb-bdc1-55104b1015b7-kube-api-access-78bqv\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622945 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-cabundle\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622964 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.622985 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623010 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-systemd-units\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623030 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623049 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-k8s-cni-cncf-io\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623075 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623094 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkn7h\" (UniqueName: \"kubernetes.io/projected/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-kube-api-access-qkn7h\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623111 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-env-overrides\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623129 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxs4n\" (UniqueName: \"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-kube-api-access-kxs4n\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623147 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-utilities\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623165 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623187 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/312ca024-c8f0-4994-8f9a-b707607341fe-host-etc-kube\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623205 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/daf25ef5-8247-4dbb-bdc1-55104b1015b7-snapshots\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623217 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623222 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-service-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623252 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623270 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/62fc400b-b3dd-4134-bd27-69dd8369153a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623289 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-node-bootstrap-token\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623398 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-env-overrides\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623428 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-config\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623441 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623447 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623475 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwcs\" (UniqueName: \"kubernetes.io/projected/533fe3c7-504f-40aa-aab0-8d66ef27920f-kube-api-access-jrwcs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623497 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-bin\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623514 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-serving-cert\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623543 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-audit\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623591 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623598 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-utilities\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623614 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-utilities\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623633 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-client\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623652 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2qdb\" (UniqueName: \"kubernetes.io/projected/9c078827-3bdb-4509-aeb3-eb558df1f6e7-kube-api-access-x2qdb\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623685 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7krn8\" (UniqueName: \"kubernetes.io/projected/fca213c3-42ca-4341-a2e6-a143b9389f9e-kube-api-access-7krn8\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623704 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvxsh\" (UniqueName: \"kubernetes.io/projected/839bf5b1-b242-4bbd-bc09-cf6abcf7f734-kube-api-access-pvxsh\") pod \"csi-snapshot-controller-operator-6fb4df594f-8x7xw\" (UID: \"839bf5b1-b242-4bbd-bc09-cf6abcf7f734\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623722 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-proxy-tls\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623738 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 12:04:56.623704 master-0 kubenswrapper[31420]: I0220 12:04:56.623754 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-images\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623774 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-conf\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623783 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623791 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-node-pullsecrets\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623814 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-conf-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623839 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m98rt\" (UniqueName: \"kubernetes.io/projected/2f9cd117-c84f-44c9-80a9-879a04d62934-kube-api-access-m98rt\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623852 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/daf25ef5-8247-4dbb-bdc1-55104b1015b7-snapshots\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623859 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbsxw\" (UniqueName: \"kubernetes.io/projected/62fc400b-b3dd-4134-bd27-69dd8369153a-kube-api-access-zbsxw\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623881 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dx9s\" (UniqueName: \"kubernetes.io/projected/34382460-b2d7-4154-87ba-c0347a4c0f1b-kube-api-access-5dx9s\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623899 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623927 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623947 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-serving-ca\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623965 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.623995 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-cnibin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624004 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-config\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624010 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-audit-dir\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624039 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2a7cb1-1d05-4b24-86ed-f823fad5013e-trusted-ca\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624059 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-systemd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624076 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovn-node-metrics-cert\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624095 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89383482-190e-4f74-a81e-b1547e5b9ae6-kube-api-access\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624113 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnvt\" (UniqueName: \"kubernetes.io/projected/fca78741-ca32-4867-b44f-483fd62f2942-kube-api-access-2cnvt\") pod \"network-check-source-58fb6744f5-gjgxv\" (UID: \"fca78741-ca32-4867-b44f-483fd62f2942\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv"
Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624167 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf25ef5-8247-4dbb-bdc1-55104b1015b7-serving-cert\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") "
pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624190 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624219 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2dv\" (UniqueName: \"kubernetes.io/projected/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-kube-api-access-8k2dv\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624242 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-encryption-config\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624263 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j82z\" (UniqueName: \"kubernetes.io/projected/906307ef-d988-49e7-9d63-39116a2c4880-kube-api-access-5j82z\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624285 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-netd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624304 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4w6\" (UniqueName: \"kubernetes.io/projected/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-kube-api-access-8p4w6\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624325 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k8n8\" (UniqueName: \"kubernetes.io/projected/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-kube-api-access-2k8n8\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624343 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af18215b-e749-4565-bb6c-24e92c452817-config-volume\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624361 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5z86\" (UniqueName: \"kubernetes.io/projected/11aaad8c-2f25-460f-b4af-f27d8bc682a0-kube-api-access-x5z86\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 12:04:56.625507 
master-0 kubenswrapper[31420]: I0220 12:04:56.624378 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-bound-sa-token\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624395 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-serving-cert\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624414 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc9wx\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-kube-api-access-sc9wx\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624432 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624450 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624470 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624491 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89383482-190e-4f74-a81e-b1547e5b9ae6-service-ca\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624508 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624529 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-catalog-content\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 12:04:56.625507 master-0 
kubenswrapper[31420]: I0220 12:04:56.624546 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-systemd\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624581 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624598 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624616 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624634 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df81fcc-f967-4874-ad16-1a89f0e7875a-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: 
\"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624680 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-sys\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624641 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-utilities\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.625025 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2a7cb1-1d05-4b24-86ed-f823fad5013e-trusted-ca\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.625212 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-serving-cert\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.625433 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-apiservice-cert\") pod 
\"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.625466 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-images\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 12:04:56.625507 master-0 kubenswrapper[31420]: I0220 12:04:56.624697 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.625641 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-utilities\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.625944 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb135cff-1a2e-468d-80ab-f7db3f57552a-proxy-tls\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.625968 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626288 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovn-node-metrics-cert\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626304 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626409 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-catalog-content\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626578 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626571 
31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4d060bff-3c25-4eeb-bdd3-e20fb2687645-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626626 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89383482-190e-4f74-a81e-b1547e5b9ae6-serving-cert\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626632 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df029f2-d0ec-4543-9371-7694b1e85a06-utilities\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626656 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c9xz\" (UniqueName: \"kubernetes.io/projected/af18215b-e749-4565-bb6c-24e92c452817-kube-api-access-7c9xz\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626753 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvm8\" (UniqueName: \"kubernetes.io/projected/5360f3f5-2d07-432f-af45-22659538c55e-kube-api-access-7vvm8\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626787 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-bin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.626820 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfmdd\" (UniqueName: \"kubernetes.io/projected/6479d88f-463f-48ed-846d-2747752a8abb-kube-api-access-mfmdd\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627113 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-wtmp\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627199 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627215 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df81fcc-f967-4874-ad16-1a89f0e7875a-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627234 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-trusted-ca-bundle\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627278 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d65a0af4-c96f-44f8-9384-6bae4585983b-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627346 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7635c0ff-4d40-4310-8187-230323e504e0-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627394 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " 
pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627435 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmwx\" (UniqueName: \"kubernetes.io/projected/bbdbadd9-eeaa-46ef-936e-5db8d395c118-kube-api-access-ttmwx\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627435 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627483 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627513 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627567 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627768 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.627942 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wnh5\" (UniqueName: \"kubernetes.io/projected/22bba1b3-587d-4802-b4ae-946827c3fa7a-kube-api-access-2wnh5\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.628005 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/07281644-2789-424f-8429-aa4448dda01e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.628021 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8djgj\" (UniqueName: \"kubernetes.io/projected/1fb59696-1d5f-41bb-9211-b89c63b10840-kube-api-access-8djgj\") pod \"migrator-5c85bff57-j46n9\" (UID: \"1fb59696-1d5f-41bb-9211-b89c63b10840\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.628072 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.628093 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.628124 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-catalog-content\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 12:04:56.628184 master-0 kubenswrapper[31420]: I0220 12:04:56.628198 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-modprobe-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 
12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628255 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aaad8c-2f25-460f-b4af-f27d8bc682a0-catalog-content\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628256 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk5sc\" (UniqueName: \"kubernetes.io/projected/eb135cff-1a2e-468d-80ab-f7db3f57552a-kube-api-access-tk5sc\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628404 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-serving-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628475 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b28c90-d5b6-44f3-867c-020ece32ac7d-config\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628510 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7k2n\" (UniqueName: \"kubernetes.io/projected/c29fd426-7c89-434e-8332-1ca31075d4bf-kube-api-access-z7k2n\") pod 
\"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628564 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-serving-cert\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628601 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-socket-dir-parent\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628632 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2kct\" (UniqueName: \"kubernetes.io/projected/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-kube-api-access-z2kct\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628664 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df81fcc-f967-4874-ad16-1a89f0e7875a-config\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: 
I0220 12:04:56.628693 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-ovnkube-identity-cm\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628729 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31969539-bfd1-466f-8697-f13cbbd957df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628761 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/4d8cd7c5-31fd-4dca-b39b-6d62eb573707-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-s57jn\" (UID: \"4d8cd7c5-31fd-4dca-b39b-6d62eb573707\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628765 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0b28c90-d5b6-44f3-867c-020ece32ac7d-config\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628789 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/afa174b3-912c-4b56-b5eb-f3e3df012c11-hosts-file\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628899 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxm8p\" (UniqueName: \"kubernetes.io/projected/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-kube-api-access-qxm8p\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628921 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq4ct\" (UniqueName: \"kubernetes.io/projected/8a97bbf5-7409-4f36-894b-b88284e1b6d0-kube-api-access-vq4ct\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628949 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.628992 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-catalog-content\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629016 31420 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-var-lib-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629046 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26x7b\" (UniqueName: \"kubernetes.io/projected/4d060bff-3c25-4eeb-bdd3-e20fb2687645-kube-api-access-26x7b\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629075 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629100 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-serving-cert\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629114 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629150 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-client\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629341 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629463 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn7cs\" (UniqueName: \"kubernetes.io/projected/ef18ace4-7316-4600-9be9-2adc792705e9-kube-api-access-kn7cs\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629486 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 
12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629499 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629504 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-rootfs\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629528 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629535 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-etc-kubernetes\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629588 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e90033-9ddf-41b4-ab61-e89add6c2fde-serving-cert\") pod 
\"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 12:04:56.629603 master-0 kubenswrapper[31420]: I0220 12:04:56.629612 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2hwr\" (UniqueName: \"kubernetes.io/projected/98226a59-5234-48f3-a9cd-21de305810dc-kube-api-access-j2hwr\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629648 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df81fcc-f967-4874-ad16-1a89f0e7875a-config\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629662 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629685 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-encryption-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 
12:04:56.629702 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629719 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-dir\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629728 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31969539-bfd1-466f-8697-f13cbbd957df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629737 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629759 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629777 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629783 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cf75ed-6a4e-444d-8975-fa6ecba79f13-catalog-content\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629794 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1388469-5e55-4c1b-97c3-c88777f29ae7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.629800 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-client\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 
12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630028 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e90033-9ddf-41b4-ab61-e89add6c2fde-serving-cert\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630046 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/836a6d7e-9b26-425f-ae21-00422515d7fe-webhook-cert\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630080 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-script-lib\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630110 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tk7\" (UniqueName: \"kubernetes.io/projected/01e90033-9ddf-41b4-ab61-e89add6c2fde-kube-api-access-j2tk7\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630137 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod 
\"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630203 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630220 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk6hv\" (UniqueName: \"kubernetes.io/projected/21e8e44b-b883-4afb-af90-d6c1265edf34-kube-api-access-rk6hv\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630465 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6dfca740-0387-428a-b957-3e8a09c6e352-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630607 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 12:04:56.630840 
master-0 kubenswrapper[31420]: I0220 12:04:56.630640 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94lkp\" (UniqueName: \"kubernetes.io/projected/39790258-73bc-4c37-a935-e8d3c2a2d5c6-kube-api-access-94lkp\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630686 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/836a6d7e-9b26-425f-ae21-00422515d7fe-ovnkube-identity-cm\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630726 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630756 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-tls\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630776 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " 
pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630795 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:56.630840 master-0 kubenswrapper[31420]: I0220 12:04:56.630868 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pw4\" (UniqueName: \"kubernetes.io/projected/07281644-2789-424f-8429-aa4448dda01e-kube-api-access-l5pw4\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.630894 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79j9f\" (UniqueName: \"kubernetes.io/projected/1709ef31-9ddd-42bf-9a95-4be4502a0828-kube-api-access-79j9f\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") " pod="openshift-multus/network-metrics-daemon-29622" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.630912 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef18ace4-7316-4600-9be9-2adc792705e9-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.630929 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rcnmk\" (UniqueName: \"kubernetes.io/projected/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-kube-api-access-rcnmk\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.630947 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysconfig\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.630956 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/836a6d7e-9b26-425f-ae21-00422515d7fe-webhook-cert\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.630964 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-serving-cert\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.630983 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631156 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-ovnkube-script-lib\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631198 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631224 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631266 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-utilities\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631296 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631323 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef18ace4-7316-4600-9be9-2adc792705e9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631351 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631377 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5m78\" (UniqueName: \"kubernetes.io/projected/7635c0ff-4d40-4310-8187-230323e504e0-kube-api-access-p5m78\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631465 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs\") pod 
\"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631493 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e90033-9ddf-41b4-ab61-e89add6c2fde-config\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631504 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-serving-cert\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631521 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mggv\" (UniqueName: \"kubernetes.io/projected/1df81fcc-f967-4874-ad16-1a89f0e7875a-kube-api-access-7mggv\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631569 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-os-release\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 
12:04:56.631596 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-serving-cert\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631622 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvjcp\" (UniqueName: \"kubernetes.io/projected/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-kube-api-access-lvjcp\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631648 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631666 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0b28c90-d5b6-44f3-867c-020ece32ac7d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631683 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-trusted-ca-bundle\") pod 
\"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.631869 master-0 kubenswrapper[31420]: I0220 12:04:56.631752 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.631944 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e90033-9ddf-41b4-ab61-e89add6c2fde-config\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.631988 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34382460-b2d7-4154-87ba-c0347a4c0f1b-utilities\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632025 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0b28c90-d5b6-44f3-867c-020ece32ac7d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632070 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zztmz\" (UniqueName: \"kubernetes.io/projected/bd609bd3-2525-4b88-8f07-94a0418fb582-kube-api-access-zztmz\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632185 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0b28c90-d5b6-44f3-867c-020ece32ac7d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632257 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-system-cni-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632275 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632285 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-webhook-cert\") pod 
\"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632317 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-cni-binary-copy\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632354 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-etc-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632393 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-env-overrides\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632447 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-sys\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632476 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/533fe3c7-504f-40aa-aab0-8d66ef27920f-cni-binary-copy\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632484 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632508 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-config\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632564 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632585 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/21e8e44b-b883-4afb-af90-d6c1265edf34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632610 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/478be5e4-cf17-4ebf-a45a-c18cd2b69929-env-overrides\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632632 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632657 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632675 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c104245-d078-4856-9a60-207bb6efcfe8-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632696 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632750 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" Feb 20 12:04:56.632840 master-0 kubenswrapper[31420]: I0220 12:04:56.632818 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/042d8457-04dc-4171-8b0f-f9e3de695c46-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:56.633784 master-0 kubenswrapper[31420]: I0220 12:04:56.632893 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:04:56.633784 master-0 kubenswrapper[31420]: I0220 12:04:56.632755 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/042d8457-04dc-4171-8b0f-f9e3de695c46-volume-directive-shadow\") pod 
\"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:56.633784 master-0 kubenswrapper[31420]: I0220 12:04:56.633043 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb135cff-1a2e-468d-80ab-f7db3f57552a-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 12:04:56.633784 master-0 kubenswrapper[31420]: I0220 12:04:56.633083 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-tmp\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.633784 master-0 kubenswrapper[31420]: I0220 12:04:56.633091 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-config\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:04:56.633784 master-0 kubenswrapper[31420]: I0220 12:04:56.633103 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" Feb 20 12:04:56.633784 master-0 kubenswrapper[31420]: I0220 12:04:56.633140 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-root\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.633784 master-0 kubenswrapper[31420]: I0220 12:04:56.633177 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-tmp\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.636178 master-0 kubenswrapper[31420]: I0220 12:04:56.636147 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 20 12:04:56.656340 master-0 kubenswrapper[31420]: I0220 12:04:56.656302 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 20 12:04:56.676148 master-0 kubenswrapper[31420]: I0220 12:04:56.676103 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 20 12:04:56.697842 master-0 kubenswrapper[31420]: I0220 12:04:56.697475 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 20 12:04:56.704464 master-0 kubenswrapper[31420]: I0220 12:04:56.704333 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-cabundle\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 12:04:56.717117 master-0 kubenswrapper[31420]: I0220 12:04:56.717071 31420 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-service-ca"/"signing-key" Feb 20 12:04:56.722651 master-0 kubenswrapper[31420]: I0220 12:04:56.722428 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8a97bbf5-7409-4f36-894b-b88284e1b6d0-signing-key\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 12:04:56.734309 master-0 kubenswrapper[31420]: I0220 12:04:56.734245 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.734420 master-0 kubenswrapper[31420]: I0220 12:04:56.734378 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:04:56.734420 master-0 kubenswrapper[31420]: I0220 12:04:56.734401 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.734610 master-0 kubenswrapper[31420]: I0220 12:04:56.734498 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: 
\"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:04:56.734610 master-0 kubenswrapper[31420]: I0220 12:04:56.734572 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 12:04:56.734706 master-0 kubenswrapper[31420]: I0220 12:04:56.734676 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysconfig\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.734765 master-0 kubenswrapper[31420]: I0220 12:04:56.734746 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.734829 master-0 kubenswrapper[31420]: I0220 12:04:56.734685 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 12:04:56.734871 master-0 kubenswrapper[31420]: I0220 12:04:56.734805 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysconfig\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.734871 master-0 kubenswrapper[31420]: I0220 12:04:56.734845 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-os-release\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.735392 master-0 kubenswrapper[31420]: I0220 12:04:56.735330 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-system-cni-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.735457 master-0 kubenswrapper[31420]: I0220 12:04:56.735387 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-sys\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.735457 master-0 kubenswrapper[31420]: I0220 12:04:56.735443 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-etc-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.735600 master-0 kubenswrapper[31420]: I0220 12:04:56.735549 
31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-root\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.735736 master-0 kubenswrapper[31420]: I0220 12:04:56.735696 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:04:56.735799 master-0 kubenswrapper[31420]: I0220 12:04:56.735777 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-hostroot\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.735892 master-0 kubenswrapper[31420]: I0220 12:04:56.735865 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-kubelet\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.735943 master-0 kubenswrapper[31420]: I0220 12:04:56.735909 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-netns\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.735993 master-0 kubenswrapper[31420]: I0220 12:04:56.735954 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-sys\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.736049 master-0 kubenswrapper[31420]: I0220 12:04:56.735967 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-multus\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.736049 master-0 kubenswrapper[31420]: I0220 12:04:56.736039 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-etc-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.736131 master-0 kubenswrapper[31420]: I0220 12:04:56.736106 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.736192 master-0 kubenswrapper[31420]: I0220 12:04:56.736162 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-root\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.736239 master-0 kubenswrapper[31420]: I0220 12:04:56.736167 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-os-release\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.736239 master-0 kubenswrapper[31420]: I0220 12:04:56.736196 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-var-lib-kubelet\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.736239 master-0 kubenswrapper[31420]: I0220 12:04:56.736230 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-var-lib-kubelet\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.736351 master-0 kubenswrapper[31420]: I0220 12:04:56.736278 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-system-cni-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.736351 master-0 kubenswrapper[31420]: I0220 12:04:56.736302 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.736427 master-0 kubenswrapper[31420]: I0220 12:04:56.736348 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-system-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.736427 master-0 kubenswrapper[31420]: I0220 12:04:56.736384 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-netns\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.736427 master-0 kubenswrapper[31420]: I0220 12:04:56.736400 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-node-log\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.736533 master-0 kubenswrapper[31420]: I0220 12:04:56.736441 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-os-release\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.736533 master-0 kubenswrapper[31420]: I0220 12:04:56.736445 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-hostroot\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.736533 master-0 kubenswrapper[31420]: I0220 12:04:56.736475 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-kubelet\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.736685 master-0 kubenswrapper[31420]: I0220 12:04:56.736625 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 20 12:04:56.736685 master-0 kubenswrapper[31420]: I0220 12:04:56.736663 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:56.736758 master-0 kubenswrapper[31420]: I0220 12:04:56.736697 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.736758 master-0 kubenswrapper[31420]: I0220 12:04:56.736732 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.736836 master-0 kubenswrapper[31420]: I0220 12:04:56.736761 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-multus\") pod \"multus-9qpc7\" (UID: 
\"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.736836 master-0 kubenswrapper[31420]: I0220 12:04:56.736739 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-ovn\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.736836 master-0 kubenswrapper[31420]: I0220 12:04:56.736811 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:56.736938 master-0 kubenswrapper[31420]: I0220 12:04:56.736830 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-ovn\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.736938 master-0 kubenswrapper[31420]: I0220 12:04:56.736861 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.736938 master-0 kubenswrapper[31420]: I0220 12:04:56.736895 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-os-release\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") 
" pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.736938 master-0 kubenswrapper[31420]: I0220 12:04:56.736915 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-node-log\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.736938 master-0 kubenswrapper[31420]: I0220 12:04:56.736898 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:04:56.737142 master-0 kubenswrapper[31420]: I0220 12:04:56.736967 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-system-cni-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.737142 master-0 kubenswrapper[31420]: I0220 12:04:56.737065 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-log-socket\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.737641 master-0 kubenswrapper[31420]: I0220 12:04:56.737581 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-log-socket\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.737758 master-0 kubenswrapper[31420]: I0220 12:04:56.737668 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-netns\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.737758 master-0 kubenswrapper[31420]: I0220 12:04:56.737711 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-lib-modules\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.737758 master-0 kubenswrapper[31420]: I0220 12:04:56.737735 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-run\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.737915 master-0 kubenswrapper[31420]: I0220 12:04:56.737798 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-netns\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.737915 master-0 kubenswrapper[31420]: I0220 12:04:56.737866 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.737999 
master-0 kubenswrapper[31420]: I0220 12:04:56.737911 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:56.737999 master-0 kubenswrapper[31420]: I0220 12:04:56.737943 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-run\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.737999 master-0 kubenswrapper[31420]: I0220 12:04:56.737975 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-kubelet\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.738138 master-0 kubenswrapper[31420]: I0220 12:04:56.738017 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-multus-certs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.738138 master-0 kubenswrapper[31420]: I0220 12:04:56.738053 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-lib-modules\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.738138 master-0 
kubenswrapper[31420]: I0220 12:04:56.738071 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-cnibin\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.738138 master-0 kubenswrapper[31420]: I0220 12:04:56.738118 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9f9442b-25b9-420f-b748-bb13423809fe-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:56.738324 master-0 kubenswrapper[31420]: I0220 12:04:56.738160 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-multus-certs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.738324 master-0 kubenswrapper[31420]: I0220 12:04:56.738200 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-host\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.738324 master-0 kubenswrapper[31420]: I0220 12:04:56.738278 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-ssl-certs\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " 
pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:04:56.738324 master-0 kubenswrapper[31420]: I0220 12:04:56.738080 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-kubelet\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.738324 master-0 kubenswrapper[31420]: I0220 12:04:56.738310 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/89383482-190e-4f74-a81e-b1547e5b9ae6-etc-ssl-certs\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:04:56.738508 master-0 kubenswrapper[31420]: I0220 12:04:56.738340 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07281644-2789-424f-8429-aa4448dda01e-cnibin\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:04:56.738508 master-0 kubenswrapper[31420]: I0220 12:04:56.738367 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-kubernetes\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.738508 master-0 kubenswrapper[31420]: I0220 12:04:56.738374 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-host\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " 
pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.738508 master-0 kubenswrapper[31420]: I0220 12:04:56.738426 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-kubernetes\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.738508 master-0 kubenswrapper[31420]: I0220 12:04:56.738434 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/906307ef-d988-49e7-9d63-39116a2c4880-host-slash\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 12:04:56.738508 master-0 kubenswrapper[31420]: I0220 12:04:56.738464 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/906307ef-d988-49e7-9d63-39116a2c4880-host-slash\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 12:04:56.738791 master-0 kubenswrapper[31420]: I0220 12:04:56.738562 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-slash\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.738791 master-0 kubenswrapper[31420]: I0220 12:04:56.738627 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: 
\"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.738791 master-0 kubenswrapper[31420]: I0220 12:04:56.738690 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-k8s-cni-cncf-io\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.738791 master-0 kubenswrapper[31420]: I0220 12:04:56.738757 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-systemd-units\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.738936 master-0 kubenswrapper[31420]: I0220 12:04:56.738797 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:04:56.738936 master-0 kubenswrapper[31420]: I0220 12:04:56.738854 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/312ca024-c8f0-4994-8f9a-b707607341fe-host-etc-kube\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 12:04:56.738936 master-0 kubenswrapper[31420]: I0220 12:04:56.738923 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-bin\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.739272 master-0 kubenswrapper[31420]: I0220 12:04:56.739100 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-conf-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.739272 master-0 kubenswrapper[31420]: I0220 12:04:56.739167 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-conf\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.739272 master-0 kubenswrapper[31420]: I0220 12:04:56.739204 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-node-pullsecrets\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.739272 master-0 kubenswrapper[31420]: I0220 12:04:56.739253 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.739505 master-0 kubenswrapper[31420]: I0220 12:04:56.739329 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-audit-dir\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.739505 master-0 kubenswrapper[31420]: I0220 12:04:56.739369 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-systemd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.739505 master-0 kubenswrapper[31420]: I0220 12:04:56.739420 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-cnibin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.739505 master-0 kubenswrapper[31420]: I0220 12:04:56.739469 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.739706 master-0 kubenswrapper[31420]: I0220 12:04:56.739564 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-netd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.739774 master-0 kubenswrapper[31420]: I0220 12:04:56.739743 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: 
\"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-systemd\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.739818 master-0 kubenswrapper[31420]: I0220 12:04:56.739777 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-conf-dir\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.739818 master-0 kubenswrapper[31420]: I0220 12:04:56.739810 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-slash\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.739898 master-0 kubenswrapper[31420]: I0220 12:04:56.739823 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.739898 master-0 kubenswrapper[31420]: I0220 12:04:56.739835 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.739898 master-0 kubenswrapper[31420]: I0220 12:04:56.739861 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-run-k8s-cni-cncf-io\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.739898 master-0 kubenswrapper[31420]: I0220 12:04:56.739878 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-sys\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.740038 master-0 kubenswrapper[31420]: I0220 12:04:56.739909 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:04:56.740038 master-0 kubenswrapper[31420]: I0220 12:04:56.739916 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-bin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.740038 master-0 kubenswrapper[31420]: I0220 12:04:56.739938 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/312ca024-c8f0-4994-8f9a-b707607341fe-host-etc-kube\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 12:04:56.740038 master-0 kubenswrapper[31420]: I0220 12:04:56.739961 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-bin\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.740038 master-0 kubenswrapper[31420]: I0220 12:04:56.739967 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-wtmp\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.740038 master-0 kubenswrapper[31420]: I0220 12:04:56.739986 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.740250 master-0 kubenswrapper[31420]: I0220 12:04:56.740094 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-sysctl-conf\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.740250 master-0 kubenswrapper[31420]: I0220 12:04:56.739886 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-systemd-units\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.740438 master-0 kubenswrapper[31420]: I0220 12:04:56.740390 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.740438 master-0 kubenswrapper[31420]: I0220 12:04:56.740425 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-sys\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.740663 master-0 kubenswrapper[31420]: I0220 12:04:56.740447 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-audit-dir\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.740663 master-0 kubenswrapper[31420]: I0220 12:04:56.740431 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-host-cni-netd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.740663 master-0 kubenswrapper[31420]: I0220 12:04:56.740467 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.740663 master-0 kubenswrapper[31420]: I0220 12:04:56.740486 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: 
\"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-systemd\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.740663 master-0 kubenswrapper[31420]: I0220 12:04:56.740490 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-run-systemd\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.740663 master-0 kubenswrapper[31420]: I0220 12:04:56.740491 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-host-var-lib-cni-bin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.740663 master-0 kubenswrapper[31420]: I0220 12:04:56.740520 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-cnibin\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.740663 master-0 kubenswrapper[31420]: I0220 12:04:56.740554 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-wtmp\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:56.740663 master-0 kubenswrapper[31420]: I0220 12:04:56.740517 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59c1cc61-8692-4a35-83fc-6bbef7086117-node-pullsecrets\") pod 
\"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.740764 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-modprobe-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.740823 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-socket-dir-parent\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.740895 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-var-lib-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.740951 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-etc-modprobe-d\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741039 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-multus-socket-dir-parent\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741069 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afa174b3-912c-4b56-b5eb-f3e3df012c11-hosts-file\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741090 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/478be5e4-cf17-4ebf-a45a-c18cd2b69929-var-lib-openvswitch\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741138 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afa174b3-912c-4b56-b5eb-f3e3df012c11-hosts-file\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741236 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-etc-kubernetes\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741288 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-rootfs\") pod 
\"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741324 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-dir\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741380 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/533fe3c7-504f-40aa-aab0-8d66ef27920f-etc-kubernetes\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741386 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-dir\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:56.741775 master-0 kubenswrapper[31420]: I0220 12:04:56.741414 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-rootfs\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 12:04:56.757407 master-0 kubenswrapper[31420]: I0220 12:04:56.757172 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 20 12:04:56.766704 master-0 kubenswrapper[31420]: I0220 12:04:56.766605 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-client\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.777033 master-0 kubenswrapper[31420]: I0220 12:04:56.776957 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 20 12:04:56.781499 master-0 kubenswrapper[31420]: I0220 12:04:56.781415 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.796978 master-0 kubenswrapper[31420]: I0220 12:04:56.796909 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 20 12:04:56.801653 master-0 kubenswrapper[31420]: I0220 12:04:56.801462 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-image-import-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.816753 master-0 kubenswrapper[31420]: I0220 12:04:56.816687 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 20 12:04:56.837704 master-0 kubenswrapper[31420]: I0220 12:04:56.837635 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 20 12:04:56.842882 master-0 kubenswrapper[31420]: I0220 12:04:56.842827 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-serving-cert\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.860601 master-0 kubenswrapper[31420]: I0220 12:04:56.852568 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/0.log" Feb 20 12:04:56.861553 master-0 kubenswrapper[31420]: I0220 12:04:56.861220 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.861553 master-0 kubenswrapper[31420]: I0220 12:04:56.861123 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989"} Feb 20 12:04:56.861553 master-0 kubenswrapper[31420]: I0220 12:04:56.861498 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 20 12:04:56.870872 master-0 kubenswrapper[31420]: I0220 12:04:56.870805 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/59c1cc61-8692-4a35-83fc-6bbef7086117-encryption-config\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.876438 master-0 kubenswrapper[31420]: I0220 12:04:56.876399 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 20 12:04:56.881120 master-0 kubenswrapper[31420]: I0220 12:04:56.881068 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:56.886051 master-0 kubenswrapper[31420]: I0220 12:04:56.886008 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-audit\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.905971 master-0 kubenswrapper[31420]: I0220 12:04:56.905922 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 20 12:04:56.912288 master-0 kubenswrapper[31420]: I0220 12:04:56.912243 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-trusted-ca-bundle\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.916043 master-0 kubenswrapper[31420]: I0220 12:04:56.916017 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 20 12:04:56.919756 master-0 kubenswrapper[31420]: I0220 12:04:56.919719 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/59c1cc61-8692-4a35-83fc-6bbef7086117-etcd-serving-ca\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:04:56.943600 master-0 kubenswrapper[31420]: I0220 12:04:56.939929 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 20 12:04:56.943600 master-0 kubenswrapper[31420]: I0220 12:04:56.941045 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-stats-auth\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:04:56.964655 master-0 kubenswrapper[31420]: I0220 12:04:56.963890 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 20 12:04:56.971406 master-0 kubenswrapper[31420]: I0220 12:04:56.971321 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c078827-3bdb-4509-aeb3-eb558df1f6e7-service-ca-bundle\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:04:56.979772 master-0 kubenswrapper[31420]: I0220 12:04:56.979602 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 20 12:04:56.996859 master-0 kubenswrapper[31420]: I0220 12:04:56.996632 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 20 12:04:57.017627 master-0 kubenswrapper[31420]: I0220 12:04:57.017574 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 20 12:04:57.036702 master-0 kubenswrapper[31420]: I0220 12:04:57.036647 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 20 12:04:57.040770 master-0 kubenswrapper[31420]: I0220 12:04:57.040726 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-metrics-certs\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " 
pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:04:57.056973 master-0 kubenswrapper[31420]: I0220 12:04:57.056913 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 20 12:04:57.059637 master-0 kubenswrapper[31420]: I0220 12:04:57.059585 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c078827-3bdb-4509-aeb3-eb558df1f6e7-default-certificate\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:04:57.075691 master-0 kubenswrapper[31420]: I0220 12:04:57.075622 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") pod \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " Feb 20 12:04:57.075789 master-0 kubenswrapper[31420]: I0220 12:04:57.075705 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") pod \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " Feb 20 12:04:57.075789 master-0 kubenswrapper[31420]: I0220 12:04:57.075696 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:04:57.076009 master-0 kubenswrapper[31420]: I0220 12:04:57.075865 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock" (OuterVolumeSpecName: "var-lock") pod "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:04:57.077433 master-0 kubenswrapper[31420]: I0220 12:04:57.077389 31420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 12:04:57.077433 master-0 kubenswrapper[31420]: I0220 12:04:57.077425 31420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 12:04:57.078077 master-0 kubenswrapper[31420]: I0220 12:04:57.078036 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 20 12:04:57.098069 master-0 kubenswrapper[31420]: I0220 12:04:57.098006 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 20 12:04:57.116616 master-0 kubenswrapper[31420]: I0220 12:04:57.116557 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 20 12:04:57.126334 master-0 kubenswrapper[31420]: I0220 12:04:57.126280 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-encryption-config\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " 
pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:57.139185 master-0 kubenswrapper[31420]: I0220 12:04:57.138916 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 20 12:04:57.142819 master-0 kubenswrapper[31420]: I0220 12:04:57.142760 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-client\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:57.177843 master-0 kubenswrapper[31420]: I0220 12:04:57.177757 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 20 12:04:57.185960 master-0 kubenswrapper[31420]: I0220 12:04:57.185908 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-etcd-serving-ca\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:57.196687 master-0 kubenswrapper[31420]: I0220 12:04:57.196634 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 20 12:04:57.204800 master-0 kubenswrapper[31420]: I0220 12:04:57.204757 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca213c3-42ca-4341-a2e6-a143b9389f9e-serving-cert\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:57.216333 master-0 kubenswrapper[31420]: I0220 12:04:57.216239 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Feb 20 12:04:57.223831 master-0 kubenswrapper[31420]: I0220 12:04:57.223795 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af18215b-e749-4565-bb6c-24e92c452817-metrics-tls\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 12:04:57.236675 master-0 kubenswrapper[31420]: I0220 12:04:57.236620 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 20 12:04:57.242905 master-0 kubenswrapper[31420]: I0220 12:04:57.242860 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-audit-policies\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:57.255881 master-0 kubenswrapper[31420]: I0220 12:04:57.255840 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 20 12:04:57.258265 master-0 kubenswrapper[31420]: I0220 12:04:57.258234 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca213c3-42ca-4341-a2e6-a143b9389f9e-trusted-ca-bundle\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:04:57.277346 master-0 kubenswrapper[31420]: I0220 12:04:57.277295 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 20 12:04:57.296854 master-0 kubenswrapper[31420]: I0220 12:04:57.296805 31420 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 20 12:04:57.316877 master-0 kubenswrapper[31420]: I0220 12:04:57.316787 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 20 12:04:57.326012 master-0 kubenswrapper[31420]: I0220 12:04:57.325943 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af18215b-e749-4565-bb6c-24e92c452817-config-volume\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 12:04:57.336431 master-0 kubenswrapper[31420]: I0220 12:04:57.336380 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 20 12:04:57.341645 master-0 kubenswrapper[31420]: I0220 12:04:57.341500 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/4d8cd7c5-31fd-4dca-b39b-6d62eb573707-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-s57jn\" (UID: \"4d8cd7c5-31fd-4dca-b39b-6d62eb573707\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn" Feb 20 12:04:57.357877 master-0 kubenswrapper[31420]: I0220 12:04:57.357785 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 20 12:04:57.377698 master-0 kubenswrapper[31420]: I0220 12:04:57.377599 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 20 12:04:57.382289 master-0 kubenswrapper[31420]: I0220 12:04:57.382230 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d9f9442b-25b9-420f-b748-bb13423809fe-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: 
\"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:57.413368 master-0 kubenswrapper[31420]: I0220 12:04:57.413273 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 20 12:04:57.417373 master-0 kubenswrapper[31420]: I0220 12:04:57.417298 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 20 12:04:57.423146 master-0 kubenswrapper[31420]: I0220 12:04:57.423065 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:04:57.437328 master-0 kubenswrapper[31420]: I0220 12:04:57.437243 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 20 12:04:57.456761 master-0 kubenswrapper[31420]: I0220 12:04:57.456639 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 20 12:04:57.458317 master-0 kubenswrapper[31420]: I0220 12:04:57.458205 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89383482-190e-4f74-a81e-b1547e5b9ae6-serving-cert\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:04:57.478093 master-0 kubenswrapper[31420]: I0220 12:04:57.477952 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 20 12:04:57.487491 master-0 
kubenswrapper[31420]: I0220 12:04:57.487423 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89383482-190e-4f74-a81e-b1547e5b9ae6-service-ca\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:04:57.494818 master-0 kubenswrapper[31420]: I0220 12:04:57.494752 31420 request.go:700] Waited for 1.004795896s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 20 12:04:57.497514 master-0 kubenswrapper[31420]: I0220 12:04:57.497451 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 20 12:04:57.530256 master-0 kubenswrapper[31420]: I0220 12:04:57.530168 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 20 12:04:57.537157 master-0 kubenswrapper[31420]: I0220 12:04:57.537092 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 20 12:04:57.546952 master-0 kubenswrapper[31420]: I0220 12:04:57.546777 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:04:57.557107 master-0 kubenswrapper[31420]: I0220 12:04:57.557020 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-jmbqp" Feb 20 12:04:57.577733 master-0 kubenswrapper[31420]: I0220 12:04:57.577654 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-llz42" Feb 20 12:04:57.597273 master-0 kubenswrapper[31420]: I0220 12:04:57.597191 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ffxph" Feb 20 12:04:57.616223 master-0 kubenswrapper[31420]: I0220 12:04:57.616143 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9z85g" Feb 20 12:04:57.619265 master-0 kubenswrapper[31420]: E0220 12:04:57.619207 31420 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619265 master-0 kubenswrapper[31420]: E0220 12:04:57.619254 31420 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619445 master-0 kubenswrapper[31420]: E0220 12:04:57.619285 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619445 master-0 kubenswrapper[31420]: E0220 12:04:57.619339 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-images podName:62fc400b-b3dd-4134-bd27-69dd8369153a nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119306686 +0000 UTC m=+2.838544967 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-images") pod "machine-api-operator-5c7cf458b4-dmvlr" (UID: "62fc400b-b3dd-4134-bd27-69dd8369153a") : failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619445 master-0 kubenswrapper[31420]: E0220 12:04:57.619364 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619445 master-0 kubenswrapper[31420]: E0220 12:04:57.619396 31420 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619445 master-0 kubenswrapper[31420]: E0220 12:04:57.619401 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9fe0660-fae4-4f97-8895-dbc4845cee40-metrics-client-ca podName:b9fe0660-fae4-4f97-8895-dbc4845cee40 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119371698 +0000 UTC m=+2.838609979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/b9fe0660-fae4-4f97-8895-dbc4845cee40-metrics-client-ca") pod "prometheus-operator-754bc4d665-5kbrl" (UID: "b9fe0660-fae4-4f97-8895-dbc4845cee40") : failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619445 master-0 kubenswrapper[31420]: E0220 12:04:57.619444 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-custom-resource-state-configmap podName:042d8457-04dc-4171-8b0f-f9e3de695c46 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119422489 +0000 UTC m=+2.838660770 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-59584d565f-9fdgm" (UID: "042d8457-04dc-4171-8b0f-f9e3de695c46") : failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619445 master-0 kubenswrapper[31420]: E0220 12:04:57.619451 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619892 master-0 kubenswrapper[31420]: E0220 12:04:57.619469 31420 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.619892 master-0 kubenswrapper[31420]: E0220 12:04:57.619473 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-images podName:bd609bd3-2525-4b88-8f07-94a0418fb582 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.11945935 +0000 UTC m=+2.838697631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-images") pod "cluster-baremetal-operator-d6bb9bb76-k95mq" (UID: "bd609bd3-2525-4b88-8f07-94a0418fb582") : failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.619892 master-0 kubenswrapper[31420]: E0220 12:04:57.619502 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-images podName:e8c48a22-ed96-42c5-ac4a-dd7d4f204539 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119491261 +0000 UTC m=+2.838729542 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-images") pod "cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" (UID: "e8c48a22-ed96-42c5-ac4a-dd7d4f204539") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.619892 master-0 kubenswrapper[31420]: E0220 12:04:57.619533 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89ed6373-78f8-4d77-82b2-1ab055b5b862-metrics-client-ca podName:89ed6373-78f8-4d77-82b2-1ab055b5b862 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119514652 +0000 UTC m=+2.838752933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/89ed6373-78f8-4d77-82b2-1ab055b5b862-metrics-client-ca") pod "openshift-state-metrics-6dbff8cb4c-zbh2z" (UID: "89ed6373-78f8-4d77-82b2-1ab055b5b862") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.619892 master-0 kubenswrapper[31420]: E0220 12:04:57.619587 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119576363 +0000 UTC m=+2.838814644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.619892 master-0 kubenswrapper[31420]: E0220 12:04:57.619621 31420 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.619892 master-0 kubenswrapper[31420]: E0220 12:04:57.619665 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ab951b1-6898-4357-b813-16365f3f89d5-cert podName:8ab951b1-6898-4357-b813-16365f3f89d5 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119654186 +0000 UTC m=+2.838892467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8ab951b1-6898-4357-b813-16365f3f89d5-cert") pod "cluster-autoscaler-operator-86b8dc6d6-sksbt" (UID: "8ab951b1-6898-4357-b813-16365f3f89d5") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.619892 master-0 kubenswrapper[31420]: E0220 12:04:57.619840 31420 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.620509 master-0 kubenswrapper[31420]: E0220 12:04:57.619910 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-config podName:7635c0ff-4d40-4310-8187-230323e504e0 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119892182 +0000 UTC m=+2.839130463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-config") pod "machine-approver-7dd9c7d7b9-qg84l" (UID: "7635c0ff-4d40-4310-8187-230323e504e0") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.620509 master-0 kubenswrapper[31420]: E0220 12:04:57.619934 31420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.620509 master-0 kubenswrapper[31420]: E0220 12:04:57.619969 31420 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.620509 master-0 kubenswrapper[31420]: E0220 12:04:57.619984 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119971115 +0000 UTC m=+2.839209396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.620509 master-0 kubenswrapper[31420]: E0220 12:04:57.620005 31420 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.620509 master-0 kubenswrapper[31420]: E0220 12:04:57.620010 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-auth-proxy-config podName:7635c0ff-4d40-4310-8187-230323e504e0 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.119998115 +0000 UTC m=+2.839236386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-auth-proxy-config") pod "machine-approver-7dd9c7d7b9-qg84l" (UID: "7635c0ff-4d40-4310-8187-230323e504e0") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.620509 master-0 kubenswrapper[31420]: E0220 12:04:57.620110 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca podName:c29fd426-7c89-434e-8332-1ca31075d4bf nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.120089008 +0000 UTC m=+2.839327289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca") pod "route-controller-manager-689d967cd5-ptpq6" (UID: "c29fd426-7c89-434e-8332-1ca31075d4bf") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620684 31420 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620749 31420 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620778 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert podName:39790258-73bc-4c37-a935-e8d3c2a2d5c6 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.120755827 +0000 UTC m=+2.839994198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert") pod "ingress-canary-f6xzr" (UID: "39790258-73bc-4c37-a935-e8d3c2a2d5c6") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620815 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert podName:98226a59-5234-48f3-a9cd-21de305810dc nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.120795138 +0000 UTC m=+2.840033419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert") pod "controller-manager-599c7886f5-zltnd" (UID: "98226a59-5234-48f3-a9cd-21de305810dc") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620826 31420 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620848 31420 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620889 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-proxy-tls podName:37cb3bb1-f5ba-4b7b-9af9-55bf61906a51 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.12086975 +0000 UTC m=+2.840108141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-proxy-tls") pod "machine-config-daemon-mpwks" (UID: "37cb3bb1-f5ba-4b7b-9af9-55bf61906a51") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620903 31420 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620926 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620943 31420 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.620975 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-certs podName:2f9cd117-c84f-44c9-80a9-879a04d62934 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.120907991 +0000 UTC m=+2.840146412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-certs") pod "machine-config-server-4wkh4" (UID: "2f9cd117-c84f-44c9-80a9-879a04d62934") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621026 master-0 kubenswrapper[31420]: E0220 12:04:57.621016 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-kube-rbac-proxy-config podName:89ed6373-78f8-4d77-82b2-1ab055b5b862 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.120996494 +0000 UTC m=+2.840234855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-6dbff8cb4c-zbh2z" (UID: "89ed6373-78f8-4d77-82b2-1ab055b5b862") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.621828 master-0 kubenswrapper[31420]: E0220 12:04:57.621062 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.121043735 +0000 UTC m=+2.840282176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.621828 master-0 kubenswrapper[31420]: E0220 12:04:57.621110 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-apiservice-cert podName:ae1fd116-6f63-4344-b7af-278665649e5a nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.121094417 +0000 UTC m=+2.840332918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-apiservice-cert") pod "packageserver-795fd44d5c-t99pw" (UID: "ae1fd116-6f63-4344-b7af-278665649e5a") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.622212 master-0 kubenswrapper[31420]: E0220 12:04:57.622138 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622212 master-0 kubenswrapper[31420]: E0220 12:04:57.622203 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-metrics-client-ca podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.122184278 +0000 UTC m=+2.841422559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-metrics-client-ca") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622380 master-0 kubenswrapper[31420]: E0220 12:04:57.622231 31420 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622380 master-0 kubenswrapper[31420]: E0220 12:04:57.622262 31420 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622380 master-0 kubenswrapper[31420]: E0220 12:04:57.622303 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles podName:98226a59-5234-48f3-a9cd-21de305810dc nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.122291571 +0000 UTC m=+2.841529842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles") pod "controller-manager-599c7886f5-zltnd" (UID: "98226a59-5234-48f3-a9cd-21de305810dc") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622380 master-0 kubenswrapper[31420]: E0220 12:04:57.622315 31420 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622380 master-0 kubenswrapper[31420]: E0220 12:04:57.622339 31420 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.622380 master-0 kubenswrapper[31420]: E0220 12:04:57.622350 31420 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622380 master-0 kubenswrapper[31420]: E0220 12:04:57.622389 31420 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.622797 master-0 kubenswrapper[31420]: E0220 12:04:57.622411 31420 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.622797 master-0 kubenswrapper[31420]: E0220 12:04:57.622458 31420 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622797 master-0 kubenswrapper[31420]: E0220 12:04:57.622584 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-config podName:62fc400b-b3dd-4134-bd27-69dd8369153a nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.122312881 +0000 UTC m=+2.841551172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-config") pod "machine-api-operator-5c7cf458b4-dmvlr" (UID: "62fc400b-b3dd-4134-bd27-69dd8369153a") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622797 master-0 kubenswrapper[31420]: E0220 12:04:57.622647 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config podName:c29fd426-7c89-434e-8332-1ca31075d4bf nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.12260758 +0000 UTC m=+2.841845861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config") pod "route-controller-manager-689d967cd5-ptpq6" (UID: "c29fd426-7c89-434e-8332-1ca31075d4bf") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.622797 master-0 kubenswrapper[31420]: E0220 12:04:57.622699 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-tls podName:042d8457-04dc-4171-8b0f-f9e3de695c46 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.122683592 +0000 UTC m=+2.841921873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-tls") pod "kube-state-metrics-59584d565f-9fdgm" (UID: "042d8457-04dc-4171-8b0f-f9e3de695c46") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.622797 master-0 kubenswrapper[31420]: E0220 12:04:57.622719 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623177 master-0 kubenswrapper[31420]: E0220 12:04:57.622784 31420 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.623177 master-0 kubenswrapper[31420]: E0220 12:04:57.622725 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-trusted-ca-bundle podName:daf25ef5-8247-4dbb-bdc1-55104b1015b7 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.122713623 +0000 UTC m=+2.841951904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-trusted-ca-bundle") pod "insights-operator-59b498fcfb-hsjr7" (UID: "daf25ef5-8247-4dbb-bdc1-55104b1015b7") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623177 master-0 kubenswrapper[31420]: E0220 12:04:57.623004 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8ab951b1-6898-4357-b813-16365f3f89d5-auth-proxy-config podName:8ab951b1-6898-4357-b813-16365f3f89d5 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.12297657 +0000 UTC m=+2.842214871 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/8ab951b1-6898-4357-b813-16365f3f89d5-auth-proxy-config") pod "cluster-autoscaler-operator-86b8dc6d6-sksbt" (UID: "8ab951b1-6898-4357-b813-16365f3f89d5") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623177 master-0 kubenswrapper[31420]: E0220 12:04:57.623049 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-kube-rbac-proxy-config podName:042d8457-04dc-4171-8b0f-f9e3de695c46 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.123028532 +0000 UTC m=+2.842266933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-59584d565f-9fdgm" (UID: "042d8457-04dc-4171-8b0f-f9e3de695c46") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.623177 master-0 kubenswrapper[31420]: E0220 12:04:57.623059 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623177 master-0 kubenswrapper[31420]: E0220 12:04:57.623081 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.123065853 +0000 UTC m=+2.842304214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.623177 master-0 kubenswrapper[31420]: E0220 12:04:57.623113 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-metrics-client-ca podName:62ba4bae-a5e1-4c4d-b544-25d0e59eeac2 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.123097794 +0000 UTC m=+2.842336165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-metrics-client-ca") pod "node-exporter-8d7nc" (UID: "62ba4bae-a5e1-4c4d-b544-25d0e59eeac2") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623177 master-0 kubenswrapper[31420]: E0220 12:04:57.623152 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-metrics-client-ca podName:042d8457-04dc-4171-8b0f-f9e3de695c46 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.123127084 +0000 UTC m=+2.842365365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-metrics-client-ca") pod "kube-state-metrics-59584d565f-9fdgm" (UID: "042d8457-04dc-4171-8b0f-f9e3de695c46") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623739 master-0 kubenswrapper[31420]: E0220 12:04:57.623220 31420 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623739 master-0 kubenswrapper[31420]: E0220 12:04:57.623245 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbdbadd9-eeaa-46ef-936e-5db8d395c118-cluster-storage-operator-serving-cert podName:bbdbadd9-eeaa-46ef-936e-5db8d395c118 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.123222307 +0000 UTC m=+2.842460648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/bbdbadd9-eeaa-46ef-936e-5db8d395c118-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f94476f49-d9vsg" (UID: "bbdbadd9-eeaa-46ef-936e-5db8d395c118") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.623739 master-0 kubenswrapper[31420]: E0220 12:04:57.623287 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca podName:98226a59-5234-48f3-a9cd-21de305810dc nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.123265998 +0000 UTC m=+2.842504289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca") pod "controller-manager-599c7886f5-zltnd" (UID: "98226a59-5234-48f3-a9cd-21de305810dc") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623739 master-0 kubenswrapper[31420]: E0220 12:04:57.623429 31420 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623739 master-0 kubenswrapper[31420]: E0220 12:04:57.623492 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-service-ca-bundle podName:daf25ef5-8247-4dbb-bdc1-55104b1015b7 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.123476474 +0000 UTC m=+2.842714755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-service-ca-bundle") pod "insights-operator-59b498fcfb-hsjr7" (UID: "daf25ef5-8247-4dbb-bdc1-55104b1015b7") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.623739 master-0 kubenswrapper[31420]: E0220 12:04:57.623607 31420 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.623739 master-0 kubenswrapper[31420]: E0220 12:04:57.623671 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-node-bootstrap-token podName:2f9cd117-c84f-44c9-80a9-879a04d62934 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.123655399 +0000 UTC m=+2.842893670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-node-bootstrap-token") pod "machine-config-server-4wkh4" (UID: "2f9cd117-c84f-44c9-80a9-879a04d62934") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.624825 master-0 kubenswrapper[31420]: E0220 12:04:57.624763 31420 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.624923 master-0 kubenswrapper[31420]: E0220 12:04:57.624846 31420 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.624923 master-0 kubenswrapper[31420]: E0220 12:04:57.624883 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62fc400b-b3dd-4134-bd27-69dd8369153a-machine-api-operator-tls podName:62fc400b-b3dd-4134-bd27-69dd8369153a nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.124861354 +0000 UTC m=+2.844099605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/62fc400b-b3dd-4134-bd27-69dd8369153a-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-dmvlr" (UID: "62fc400b-b3dd-4134-bd27-69dd8369153a") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.625089 master-0 kubenswrapper[31420]: E0220 12:04:57.624926 31420 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.625089 master-0 kubenswrapper[31420]: E0220 12:04:57.624941 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config podName:98226a59-5234-48f3-a9cd-21de305810dc nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.124915935 +0000 UTC m=+2.844154226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config") pod "controller-manager-599c7886f5-zltnd" (UID: "98226a59-5234-48f3-a9cd-21de305810dc") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.625089 master-0 kubenswrapper[31420]: E0220 12:04:57.624968 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daf25ef5-8247-4dbb-bdc1-55104b1015b7-serving-cert podName:daf25ef5-8247-4dbb-bdc1-55104b1015b7 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.124959556 +0000 UTC m=+2.844197807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/daf25ef5-8247-4dbb-bdc1-55104b1015b7-serving-cert") pod "insights-operator-59b498fcfb-hsjr7" (UID: "daf25ef5-8247-4dbb-bdc1-55104b1015b7") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.626109 master-0 kubenswrapper[31420]: E0220 12:04:57.626048 31420 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.626270 master-0 kubenswrapper[31420]: E0220 12:04:57.626159 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.626270 master-0 kubenswrapper[31420]: E0220 12:04:57.626208 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-proxy-tls podName:29489539-68c6-49dd-bc1b-dcf0c7bb2ebe nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.126183011 +0000 UTC m=+2.845421302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-proxy-tls") pod "machine-config-controller-54cb48566c-8m59n" (UID: "29489539-68c6-49dd-bc1b-dcf0c7bb2ebe") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.626393 master-0 kubenswrapper[31420]: E0220 12:04:57.626272 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.126245753 +0000 UTC m=+2.845484044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.627280 master-0 kubenswrapper[31420]: E0220 12:04:57.627236 31420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-cqc0j177hn3k9: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.627368 master-0 kubenswrapper[31420]: E0220 12:04:57.627291 31420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.627368 master-0 kubenswrapper[31420]: E0220 12:04:57.627314 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.127296773 +0000 UTC m=+2.846535024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.627368 master-0 kubenswrapper[31420]: E0220 12:04:57.627344 31420 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.627368 master-0 kubenswrapper[31420]: E0220 12:04:57.627365 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-tls podName:b9fe0660-fae4-4f97-8895-dbc4845cee40 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.127342234 +0000 UTC m=+2.846580515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-5kbrl" (UID: "b9fe0660-fae4-4f97-8895-dbc4845cee40") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.627657 master-0 kubenswrapper[31420]: E0220 12:04:57.627436 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-auth-proxy-config podName:e8c48a22-ed96-42c5-ac4a-dd7d4f204539 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.127412956 +0000 UTC m=+2.846651247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" (UID: "e8c48a22-ed96-42c5-ac4a-dd7d4f204539") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.627657 master-0 kubenswrapper[31420]: E0220 12:04:57.627347 31420 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.627657 master-0 kubenswrapper[31420]: E0220 12:04:57.627511 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-kube-rbac-proxy-config podName:62ba4bae-a5e1-4c4d-b544-25d0e59eeac2 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.127494189 +0000 UTC m=+2.846732570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-kube-rbac-proxy-config") pod "node-exporter-8d7nc" (UID: "62ba4bae-a5e1-4c4d-b544-25d0e59eeac2") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.627657 master-0 kubenswrapper[31420]: E0220 12:04:57.627553 31420 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.627657 master-0 kubenswrapper[31420]: E0220 12:04:57.627587 31420 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.627657 master-0 kubenswrapper[31420]: E0220 12:04:57.627597 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7635c0ff-4d40-4310-8187-230323e504e0-machine-approver-tls podName:7635c0ff-4d40-4310-8187-230323e504e0 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.127586891 +0000 UTC m=+2.846825142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/7635c0ff-4d40-4310-8187-230323e504e0-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-qg84l" (UID: "7635c0ff-4d40-4310-8187-230323e504e0") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.628001 master-0 kubenswrapper[31420]: E0220 12:04:57.627687 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls podName:89ed6373-78f8-4d77-82b2-1ab055b5b862 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.127666173 +0000 UTC m=+2.846904464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls") pod "openshift-state-metrics-6dbff8cb4c-zbh2z" (UID: "89ed6373-78f8-4d77-82b2-1ab055b5b862") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.628955 master-0 kubenswrapper[31420]: E0220 12:04:57.628888 31420 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.629065 master-0 kubenswrapper[31420]: E0220 12:04:57.628963 31420 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.629065 master-0 kubenswrapper[31420]: E0220 12:04:57.628992 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.128974111 +0000 UTC m=+2.848212392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.629065 master-0 kubenswrapper[31420]: E0220 12:04:57.629052 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-cloud-controller-manager-operator-tls podName:e8c48a22-ed96-42c5-ac4a-dd7d4f204539 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.129027732 +0000 UTC m=+2.848266023 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" (UID: "e8c48a22-ed96-42c5-ac4a-dd7d4f204539") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.630472 master-0 kubenswrapper[31420]: E0220 12:04:57.630427 31420 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:57.630712 master-0 kubenswrapper[31420]: E0220 12:04:57.630483 31420 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:57.630712 master-0 kubenswrapper[31420]: E0220 12:04:57.630497 31420 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-config podName:bd609bd3-2525-4b88-8f07-94a0418fb582 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.130481794 +0000 UTC m=+2.849720075 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-config") pod "cluster-baremetal-operator-d6bb9bb76-k95mq" (UID: "bd609bd3-2525-4b88-8f07-94a0418fb582") : failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.630712 master-0 kubenswrapper[31420]: E0220 12:04:57.630617 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.630712 master-0 kubenswrapper[31420]: E0220 12:04:57.630630 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cluster-baremetal-operator-tls podName:bd609bd3-2525-4b88-8f07-94a0418fb582 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.130598247 +0000 UTC m=+2.849836558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k95mq" (UID: "bd609bd3-2525-4b88-8f07-94a0418fb582") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.630712 master-0 kubenswrapper[31420]: E0220 12:04:57.630673 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.130658439 +0000 UTC m=+2.849896720 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.631955 master-0 kubenswrapper[31420]: E0220 12:04:57.631903 31420 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.632045 master-0 kubenswrapper[31420]: E0220 12:04:57.631969 31420 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.632105 master-0 kubenswrapper[31420]: E0220 12:04:57.632020 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs podName:6479d88f-463f-48ed-846d-2747752a8abb nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.131981306 +0000 UTC m=+2.851219607 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs") pod "multus-admission-controller-5f54bf67d4-zxsc2" (UID: "6479d88f-463f-48ed-846d-2747752a8abb") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.632288 master-0 kubenswrapper[31420]: E0220 12:04:57.632058 31420 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.632362 master-0 kubenswrapper[31420]: E0220 12:04:57.632071 31420 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.632423 master-0 kubenswrapper[31420]: E0220 12:04:57.632077 31420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.632484 master-0 kubenswrapper[31420]: E0220 12:04:57.632097 31420 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.632637 master-0 kubenswrapper[31420]: E0220 12:04:57.632104 31420 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.632812 master-0 kubenswrapper[31420]: E0220 12:04:57.632766 31420 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.632965 master-0 kubenswrapper[31420]: E0220 12:04:57.632911 31420 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 20 
12:04:57.633054 master-0 kubenswrapper[31420]: E0220 12:04:57.633038 31420 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.633310 master-0 kubenswrapper[31420]: E0220 12:04:57.633105 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.633402 master-0 kubenswrapper[31420]: E0220 12:04:57.633358 31420 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.633466 master-0 kubenswrapper[31420]: E0220 12:04:57.633431 31420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.633926 master-0 kubenswrapper[31420]: E0220 12:04:57.632201 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert podName:c29fd426-7c89-434e-8332-1ca31075d4bf nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.132175642 +0000 UTC m=+2.851413963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert") pod "route-controller-manager-689d967cd5-ptpq6" (UID: "c29fd426-7c89-434e-8332-1ca31075d4bf") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.634007 master-0 kubenswrapper[31420]: E0220 12:04:57.633950 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cert podName:bd609bd3-2525-4b88-8f07-94a0418fb582 nodeName:}" failed. 
No retries permitted until 2026-02-20 12:04:58.133917821 +0000 UTC m=+2.853156112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cert") pod "cluster-baremetal-operator-d6bb9bb76-k95mq" (UID: "bd609bd3-2525-4b88-8f07-94a0418fb582") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.634260 master-0 kubenswrapper[31420]: E0220 12:04:57.634153 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef18ace4-7316-4600-9be9-2adc792705e9-cloud-credential-operator-serving-cert podName:ef18ace4-7316-4600-9be9-2adc792705e9 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.134112307 +0000 UTC m=+2.853350578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/ef18ace4-7316-4600-9be9-2adc792705e9-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-fq68q" (UID: "ef18ace4-7316-4600-9be9-2adc792705e9") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.634362 master-0 kubenswrapper[31420]: E0220 12:04:57.634335 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.134261311 +0000 UTC m=+2.853499592 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.634508 master-0 kubenswrapper[31420]: E0220 12:04:57.634372 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef18ace4-7316-4600-9be9-2adc792705e9-cco-trusted-ca podName:ef18ace4-7316-4600-9be9-2adc792705e9 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.134359894 +0000 UTC m=+2.853598175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/ef18ace4-7316-4600-9be9-2adc792705e9-cco-trusted-ca") pod "cloud-credential-operator-6968c58f46-fq68q" (UID: "ef18ace4-7316-4600-9be9-2adc792705e9") : failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.634641 master-0 kubenswrapper[31420]: E0220 12:04:57.634613 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-webhook-cert podName:ae1fd116-6f63-4344-b7af-278665649e5a nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.134594931 +0000 UTC m=+2.853833212 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-webhook-cert") pod "packageserver-795fd44d5c-t99pw" (UID: "ae1fd116-6f63-4344-b7af-278665649e5a") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.634727 master-0 kubenswrapper[31420]: E0220 12:04:57.634710 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.134695553 +0000 UTC m=+2.853933834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.634856 master-0 kubenswrapper[31420]: E0220 12:04:57.634807 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21e8e44b-b883-4afb-af90-d6c1265edf34-control-plane-machine-set-operator-tls podName:21e8e44b-b883-4afb-af90-d6c1265edf34 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.134729894 +0000 UTC m=+2.853968175 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/21e8e44b-b883-4afb-af90-d6c1265edf34-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-686847ff5f-fn7j5" (UID: "21e8e44b-b883-4afb-af90-d6c1265edf34") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.634948 master-0 kubenswrapper[31420]: E0220 12:04:57.634918 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.134838927 +0000 UTC m=+2.854077328 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync configmap cache: timed out waiting for the condition Feb 20 12:04:57.635125 master-0 kubenswrapper[31420]: E0220 12:04:57.635036 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c104245-d078-4856-9a60-207bb6efcfe8-samples-operator-tls podName:5c104245-d078-4856-9a60-207bb6efcfe8 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.135012042 +0000 UTC m=+2.854250413 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/5c104245-d078-4856-9a60-207bb6efcfe8-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-2k7xj" (UID: "5c104245-d078-4856-9a60-207bb6efcfe8") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.635357 master-0 kubenswrapper[31420]: E0220 12:04:57.635292 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-kube-rbac-proxy-config podName:b9fe0660-fae4-4f97-8895-dbc4845cee40 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.135231769 +0000 UTC m=+2.854470040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-754bc4d665-5kbrl" (UID: "b9fe0660-fae4-4f97-8895-dbc4845cee40") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.635432 master-0 kubenswrapper[31420]: E0220 12:04:57.635373 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-tls podName:62ba4bae-a5e1-4c4d-b544-25d0e59eeac2 nodeName:}" failed. No retries permitted until 2026-02-20 12:04:58.135356972 +0000 UTC m=+2.854595243 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-tls") pod "node-exporter-8d7nc" (UID: "62ba4bae-a5e1-4c4d-b544-25d0e59eeac2") : failed to sync secret cache: timed out waiting for the condition Feb 20 12:04:57.636282 master-0 kubenswrapper[31420]: I0220 12:04:57.636223 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 20 12:04:57.670913 master-0 kubenswrapper[31420]: I0220 12:04:57.669998 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 20 12:04:57.678470 master-0 kubenswrapper[31420]: I0220 12:04:57.678382 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 20 12:04:57.697019 master-0 kubenswrapper[31420]: I0220 12:04:57.696946 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 20 12:04:57.718493 master-0 kubenswrapper[31420]: I0220 12:04:57.718038 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 20 12:04:57.737502 master-0 kubenswrapper[31420]: I0220 12:04:57.737358 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-vncjl" Feb 20 12:04:57.757463 master-0 kubenswrapper[31420]: I0220 12:04:57.757394 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 20 12:04:57.777209 master-0 kubenswrapper[31420]: I0220 12:04:57.777055 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-nd6lj" Feb 20 
12:04:57.797537 master-0 kubenswrapper[31420]: I0220 12:04:57.797464 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 20 12:04:57.817766 master-0 kubenswrapper[31420]: I0220 12:04:57.817699 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 20 12:04:57.837436 master-0 kubenswrapper[31420]: I0220 12:04:57.837377 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 20 12:04:57.857319 master-0 kubenswrapper[31420]: I0220 12:04:57.857269 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 20 12:04:57.867634 master-0 kubenswrapper[31420]: I0220 12:04:57.867590 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:04:57.867836 master-0 kubenswrapper[31420]: I0220 12:04:57.867783 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 12:04:57.877679 master-0 kubenswrapper[31420]: I0220 12:04:57.877638 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 20 12:04:57.896725 master-0 kubenswrapper[31420]: I0220 12:04:57.896667 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-69d2b" Feb 20 12:04:57.915954 master-0 kubenswrapper[31420]: I0220 12:04:57.915908 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 20 12:04:57.936550 master-0 kubenswrapper[31420]: I0220 12:04:57.936487 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 20 12:04:57.957000 master-0 
kubenswrapper[31420]: I0220 12:04:57.956952 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 20 12:04:57.976839 master-0 kubenswrapper[31420]: I0220 12:04:57.976786 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 20 12:04:57.996150 master-0 kubenswrapper[31420]: I0220 12:04:57.996069 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 20 12:04:58.017361 master-0 kubenswrapper[31420]: I0220 12:04:58.017310 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 20 12:04:58.047769 master-0 kubenswrapper[31420]: I0220 12:04:58.047699 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 20 12:04:58.056745 master-0 kubenswrapper[31420]: I0220 12:04:58.056708 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 20 12:04:58.078236 master-0 kubenswrapper[31420]: I0220 12:04:58.078196 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xsh5v" Feb 20 12:04:58.097672 master-0 kubenswrapper[31420]: I0220 12:04:58.097623 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 20 12:04:58.112238 master-0 kubenswrapper[31420]: I0220 12:04:58.112214 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 20 12:04:58.116999 master-0 kubenswrapper[31420]: I0220 12:04:58.116955 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 20 
12:04:58.137313 master-0 kubenswrapper[31420]: I0220 12:04:58.137222 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-lv5zr" Feb 20 12:04:58.158809 master-0 kubenswrapper[31420]: I0220 12:04:58.158704 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 20 12:04:58.178109 master-0 kubenswrapper[31420]: I0220 12:04:58.178041 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 20 12:04:58.197522 master-0 kubenswrapper[31420]: I0220 12:04:58.197448 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-jfq59" Feb 20 12:04:58.216752 master-0 kubenswrapper[31420]: I0220 12:04:58.216700 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 20 12:04:58.219647 master-0 kubenswrapper[31420]: I0220 12:04:58.219550 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ab951b1-6898-4357-b813-16365f3f89d5-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" Feb 20 12:04:58.219959 master-0 kubenswrapper[31420]: I0220 12:04:58.219903 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:58.220068 master-0 kubenswrapper[31420]: I0220 12:04:58.220003 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-certs\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4" Feb 20 12:04:58.220162 master-0 kubenswrapper[31420]: I0220 12:04:58.220121 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:58.220252 master-0 kubenswrapper[31420]: I0220 12:04:58.220219 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:58.220316 master-0 kubenswrapper[31420]: I0220 12:04:58.220269 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-config\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 12:04:58.220376 master-0 kubenswrapper[31420]: I0220 12:04:58.220338 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles\") pod \"controller-manager-599c7886f5-zltnd\" (UID: 
\"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:58.220447 master-0 kubenswrapper[31420]: I0220 12:04:58.220376 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-metrics-client-ca\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:58.220447 master-0 kubenswrapper[31420]: I0220 12:04:58.220412 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:58.220447 master-0 kubenswrapper[31420]: I0220 12:04:58.220414 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:58.220674 master-0 kubenswrapper[31420]: I0220 12:04:58.220465 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" Feb 20 12:04:58.220845 master-0 kubenswrapper[31420]: I0220 12:04:58.220794 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:04:58.220916 master-0 kubenswrapper[31420]: I0220 12:04:58.220852 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:58.220978 master-0 kubenswrapper[31420]: I0220 12:04:58.220910 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-metrics-client-ca\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:58.221038 master-0 kubenswrapper[31420]: I0220 12:04:58.220967 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ab951b1-6898-4357-b813-16365f3f89d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" Feb 20 12:04:58.221097 master-0 kubenswrapper[31420]: I0220 12:04:58.221043 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" 
Feb 20 12:04:58.221167 master-0 kubenswrapper[31420]: I0220 12:04:58.221129 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdbadd9-eeaa-46ef-936e-5db8d395c118-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" Feb 20 12:04:58.221229 master-0 kubenswrapper[31420]: I0220 12:04:58.221185 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:58.221289 master-0 kubenswrapper[31420]: I0220 12:04:58.221268 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-service-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" Feb 20 12:04:58.221352 master-0 kubenswrapper[31420]: I0220 12:04:58.221308 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/62fc400b-b3dd-4134-bd27-69dd8369153a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 12:04:58.221352 master-0 kubenswrapper[31420]: I0220 12:04:58.221344 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-node-bootstrap-token\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4" Feb 20 12:04:58.221646 master-0 kubenswrapper[31420]: I0220 12:04:58.221382 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:58.221646 master-0 kubenswrapper[31420]: I0220 12:04:58.221467 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-proxy-tls\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" Feb 20 12:04:58.221646 master-0 kubenswrapper[31420]: I0220 12:04:58.221502 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:58.221815 master-0 kubenswrapper[31420]: I0220 12:04:58.221648 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" Feb 20 
12:04:58.221815 master-0 kubenswrapper[31420]: I0220 12:04:58.221690 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 12:04:58.221815 master-0 kubenswrapper[31420]: I0220 12:04:58.221772 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf25ef5-8247-4dbb-bdc1-55104b1015b7-serving-cert\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" Feb 20 12:04:58.221985 master-0 kubenswrapper[31420]: I0220 12:04:58.221897 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:58.221985 master-0 kubenswrapper[31420]: I0220 12:04:58.221934 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:58.222095 master-0 kubenswrapper[31420]: I0220 12:04:58.222019 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 12:04:58.222095 master-0 kubenswrapper[31420]: I0220 12:04:58.222076 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7635c0ff-4d40-4310-8187-230323e504e0-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 12:04:58.222215 master-0 kubenswrapper[31420]: I0220 12:04:58.222114 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:04:58.222215 master-0 kubenswrapper[31420]: I0220 12:04:58.222165 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:58.222345 master-0 kubenswrapper[31420]: I0220 12:04:58.222322 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle\") pod 
\"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:58.222418 master-0 kubenswrapper[31420]: I0220 12:04:58.222381 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:58.222487 master-0 kubenswrapper[31420]: I0220 12:04:58.222433 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:58.222553 master-0 kubenswrapper[31420]: I0220 12:04:58.222504 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:58.222646 master-0 kubenswrapper[31420]: I0220 12:04:58.222593 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef18ace4-7316-4600-9be9-2adc792705e9-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 12:04:58.222992 master-0 kubenswrapper[31420]: I0220 
12:04:58.222937 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:04:58.223109 master-0 kubenswrapper[31420]: I0220 12:04:58.223076 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef18ace4-7316-4600-9be9-2adc792705e9-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 12:04:58.223548 master-0 kubenswrapper[31420]: I0220 12:04:58.223504 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:58.223693 master-0 kubenswrapper[31420]: I0220 12:04:58.223653 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-tls\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:58.223693 master-0 kubenswrapper[31420]: I0220 12:04:58.223686 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " 
pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:58.223819 master-0 kubenswrapper[31420]: I0220 12:04:58.223729 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef18ace4-7316-4600-9be9-2adc792705e9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 12:04:58.223819 master-0 kubenswrapper[31420]: I0220 12:04:58.223751 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:04:58.223819 master-0 kubenswrapper[31420]: I0220 12:04:58.223793 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" Feb 20 12:04:58.224064 master-0 kubenswrapper[31420]: I0220 12:04:58.223840 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-webhook-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 12:04:58.224064 master-0 kubenswrapper[31420]: I0220 12:04:58.223867 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/21e8e44b-b883-4afb-af90-d6c1265edf34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" Feb 20 12:04:58.224064 master-0 kubenswrapper[31420]: I0220 12:04:58.223910 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c104245-d078-4856-9a60-207bb6efcfe8-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 12:04:58.224064 master-0 kubenswrapper[31420]: I0220 12:04:58.223928 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:58.224064 master-0 kubenswrapper[31420]: I0220 12:04:58.223934 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:58.224064 master-0 kubenswrapper[31420]: I0220 12:04:58.224013 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:58.224064 master-0 kubenswrapper[31420]: I0220 12:04:58.224058 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" Feb 20 12:04:58.224515 master-0 kubenswrapper[31420]: I0220 12:04:58.224100 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 12:04:58.224515 master-0 kubenswrapper[31420]: I0220 12:04:58.224138 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89ed6373-78f8-4d77-82b2-1ab055b5b862-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:04:58.224515 master-0 kubenswrapper[31420]: I0220 12:04:58.224182 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" 
(UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:58.224515 master-0 kubenswrapper[31420]: I0220 12:04:58.224214 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fe0660-fae4-4f97-8895-dbc4845cee40-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" Feb 20 12:04:58.224515 master-0 kubenswrapper[31420]: I0220 12:04:58.224265 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:04:58.224515 master-0 kubenswrapper[31420]: I0220 12:04:58.224316 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:58.224515 master-0 kubenswrapper[31420]: I0220 12:04:58.224333 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-proxy-tls\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 12:04:58.224515 master-0 kubenswrapper[31420]: I0220 12:04:58.224372 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224541 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd609bd3-2525-4b88-8f07-94a0418fb582-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224588 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224633 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224653 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 
12:04:58.224671 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-images\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224697 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224716 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-apiservice-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224738 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224760 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224783 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:58.224987 master-0 kubenswrapper[31420]: I0220 12:04:58.224912 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:04:58.225666 master-0 kubenswrapper[31420]: I0220 12:04:58.225028 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef18ace4-7316-4600-9be9-2adc792705e9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 12:04:58.225666 master-0 kubenswrapper[31420]: I0220 12:04:58.225272 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c104245-d078-4856-9a60-207bb6efcfe8-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 12:04:58.225666 master-0 kubenswrapper[31420]: I0220 12:04:58.225430 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/21e8e44b-b883-4afb-af90-d6c1265edf34-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" Feb 20 12:04:58.225666 master-0 kubenswrapper[31420]: I0220 12:04:58.225432 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:04:58.226111 master-0 kubenswrapper[31420]: I0220 12:04:58.226056 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:04:58.236584 master-0 kubenswrapper[31420]: I0220 12:04:58.236507 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 20 12:04:58.245282 master-0 kubenswrapper[31420]: I0220 12:04:58.245213 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd609bd3-2525-4b88-8f07-94a0418fb582-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: 
\"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:04:58.246384 master-0 kubenswrapper[31420]: I0220 12:04:58.246287 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:04:58.258979 master-0 kubenswrapper[31420]: I0220 12:04:58.258903 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-mhfhg" Feb 20 12:04:58.277708 master-0 kubenswrapper[31420]: I0220 12:04:58.277600 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 20 12:04:58.281318 master-0 kubenswrapper[31420]: I0220 12:04:58.281252 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ab951b1-6898-4357-b813-16365f3f89d5-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" Feb 20 12:04:58.296879 master-0 kubenswrapper[31420]: I0220 12:04:58.296821 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 20 12:04:58.304281 master-0 kubenswrapper[31420]: I0220 12:04:58.304216 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ab951b1-6898-4357-b813-16365f3f89d5-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" Feb 20 12:04:58.316978 master-0 kubenswrapper[31420]: I0220 12:04:58.316923 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 20 12:04:58.324644 master-0 kubenswrapper[31420]: I0220 12:04:58.324534 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdbadd9-eeaa-46ef-936e-5db8d395c118-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg"
Feb 20 12:04:58.336802 master-0 kubenswrapper[31420]: I0220 12:04:58.336753 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-rjk9v"
Feb 20 12:04:58.356576 master-0 kubenswrapper[31420]: I0220 12:04:58.356496 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 20 12:04:58.364208 master-0 kubenswrapper[31420]: I0220 12:04:58.364121 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-service-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 12:04:58.377162 master-0 kubenswrapper[31420]: I0220 12:04:58.377099 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-8ksk5"
Feb 20 12:04:58.397134 master-0 kubenswrapper[31420]: I0220 12:04:58.397071 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 20 12:04:58.403846 master-0 kubenswrapper[31420]: I0220 12:04:58.403787 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf25ef5-8247-4dbb-bdc1-55104b1015b7-serving-cert\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 12:04:58.416333 master-0 kubenswrapper[31420]: I0220 12:04:58.416268 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 20 12:04:58.437961 master-0 kubenswrapper[31420]: I0220 12:04:58.437873 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 20 12:04:58.464080 master-0 kubenswrapper[31420]: I0220 12:04:58.464005 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 20 12:04:58.468405 master-0 kubenswrapper[31420]: I0220 12:04:58.468332 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf25ef5-8247-4dbb-bdc1-55104b1015b7-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7"
Feb 20 12:04:58.478105 master-0 kubenswrapper[31420]: I0220 12:04:58.478061 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 20 12:04:58.485658 master-0 kubenswrapper[31420]: I0220 12:04:58.485612 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-webhook-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 12:04:58.486009 master-0 kubenswrapper[31420]: I0220 12:04:58.485957 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae1fd116-6f63-4344-b7af-278665649e5a-apiservice-cert\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw"
Feb 20 12:04:58.495301 master-0 kubenswrapper[31420]: I0220 12:04:58.495249 31420 request.go:700] Waited for 1.993233827s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&limit=500&resourceVersion=0
Feb 20 12:04:58.497281 master-0 kubenswrapper[31420]: I0220 12:04:58.497198 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 20 12:04:58.506761 master-0 kubenswrapper[31420]: I0220 12:04:58.506720 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-images\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"
Feb 20 12:04:58.516838 master-0 kubenswrapper[31420]: I0220 12:04:58.516770 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-r89nt"
Feb 20 12:04:58.537386 master-0 kubenswrapper[31420]: I0220 12:04:58.537319 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-st2x9"
Feb 20 12:04:58.556683 master-0 kubenswrapper[31420]: I0220 12:04:58.556616 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 20 12:04:58.564364 master-0 kubenswrapper[31420]: I0220 12:04:58.564265 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/62fc400b-b3dd-4134-bd27-69dd8369153a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"
Feb 20 12:04:58.577300 master-0 kubenswrapper[31420]: I0220 12:04:58.577215 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 20 12:04:58.584477 master-0 kubenswrapper[31420]: I0220 12:04:58.584428 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62fc400b-b3dd-4134-bd27-69dd8369153a-config\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr"
Feb 20 12:04:58.596765 master-0 kubenswrapper[31420]: I0220 12:04:58.596710 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 12:04:58.616440 master-0 kubenswrapper[31420]: I0220 12:04:58.616379 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 12:04:58.636271 master-0 kubenswrapper[31420]: I0220 12:04:58.636221 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 20 12:04:58.645846 master-0 kubenswrapper[31420]: I0220 12:04:58.645713 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg"
Feb 20 12:04:58.656731 master-0 kubenswrapper[31420]: I0220 12:04:58.656671 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 20 12:04:58.663919 master-0 kubenswrapper[31420]: I0220 12:04:58.663857 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg"
Feb 20 12:04:58.677320 master-0 kubenswrapper[31420]: I0220 12:04:58.677263 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 20 12:04:58.683941 master-0 kubenswrapper[31420]: I0220 12:04:58.683898 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg"
Feb 20 12:04:58.696353 master-0 kubenswrapper[31420]: I0220 12:04:58.696311 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 20 12:04:58.706089 master-0 kubenswrapper[31420]: I0220 12:04:58.706048 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l"
Feb 20 12:04:58.717200 master-0 kubenswrapper[31420]: I0220 12:04:58.717135 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-l5hc4"
Feb 20 12:04:58.736280 master-0 kubenswrapper[31420]: I0220 12:04:58.736216 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 20 12:04:58.744364 master-0 kubenswrapper[31420]: I0220 12:04:58.744295 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7635c0ff-4d40-4310-8187-230323e504e0-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l"
Feb 20 12:04:58.756630 master-0 kubenswrapper[31420]: I0220 12:04:58.756420 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 20 12:04:58.767118 master-0 kubenswrapper[31420]: I0220 12:04:58.767029 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:04:58.777186 master-0 kubenswrapper[31420]: I0220 12:04:58.777122 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 20 12:04:58.797464 master-0 kubenswrapper[31420]: I0220 12:04:58.797384 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 20 12:04:58.805709 master-0 kubenswrapper[31420]: I0220 12:04:58.804972 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7635c0ff-4d40-4310-8187-230323e504e0-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l"
Feb 20 12:04:58.816776 master-0 kubenswrapper[31420]: I0220 12:04:58.816635 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-g5hcq"
Feb 20 12:04:58.837260 master-0 kubenswrapper[31420]: I0220 12:04:58.837185 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 20 12:04:58.846470 master-0 kubenswrapper[31420]: I0220 12:04:58.846397 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-proxy-tls\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks"
Feb 20 12:04:58.857687 master-0 kubenswrapper[31420]: I0220 12:04:58.857619 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-g5hlk"
Feb 20 12:04:58.876928 master-0 kubenswrapper[31420]: I0220 12:04:58.876818 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 20 12:04:58.884599 master-0 kubenswrapper[31420]: I0220 12:04:58.884516 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-certs\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 12:04:58.898016 master-0 kubenswrapper[31420]: I0220 12:04:58.897952 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 20 12:04:58.909629 master-0 kubenswrapper[31420]: I0220 12:04:58.909511 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 12:04:58.917641 master-0 kubenswrapper[31420]: I0220 12:04:58.917580 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 20 12:04:58.924636 master-0 kubenswrapper[31420]: I0220 12:04:58.924519 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-proxy-tls\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n"
Feb 20 12:04:58.937206 master-0 kubenswrapper[31420]: I0220 12:04:58.937155 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-92b9q"
Feb 20 12:04:58.958316 master-0 kubenswrapper[31420]: I0220 12:04:58.958242 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 20 12:04:58.964699 master-0 kubenswrapper[31420]: I0220 12:04:58.964644 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2f9cd117-c84f-44c9-80a9-879a04d62934-node-bootstrap-token\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4"
Feb 20 12:04:58.976895 master-0 kubenswrapper[31420]: I0220 12:04:58.976824 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-k7mnd"
Feb 20 12:04:58.996798 master-0 kubenswrapper[31420]: I0220 12:04:58.996738 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-jxm2z"
Feb 20 12:04:59.016426 master-0 kubenswrapper[31420]: I0220 12:04:59.016331 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 20 12:04:59.024442 master-0 kubenswrapper[31420]: I0220 12:04:59.024201 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 12:04:59.037470 master-0 kubenswrapper[31420]: I0220 12:04:59.037396 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-46trq"
Feb 20 12:04:59.056905 master-0 kubenswrapper[31420]: I0220 12:04:59.056849 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 20 12:04:59.061740 master-0 kubenswrapper[31420]: I0220 12:04:59.061686 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 12:04:59.077979 master-0 kubenswrapper[31420]: I0220 12:04:59.077932 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 20 12:04:59.081676 master-0 kubenswrapper[31420]: I0220 12:04:59.081630 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-metrics-client-ca\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc"
Feb 20 12:04:59.083958 master-0 kubenswrapper[31420]: I0220 12:04:59.083920 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/042d8457-04dc-4171-8b0f-f9e3de695c46-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm"
Feb 20 12:04:59.084056 master-0 kubenswrapper[31420]: I0220 12:04:59.083977 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-metrics-client-ca\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc"
Feb 20 12:04:59.086324 master-0 kubenswrapper[31420]: I0220 12:04:59.086281 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fe0660-fae4-4f97-8895-dbc4845cee40-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 12:04:59.086408 master-0 kubenswrapper[31420]: I0220 12:04:59.086291 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89ed6373-78f8-4d77-82b2-1ab055b5b862-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"
Feb 20 12:04:59.096384 master-0 kubenswrapper[31420]: I0220 12:04:59.096324 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 20 12:04:59.104086 master-0 kubenswrapper[31420]: I0220 12:04:59.104039 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc"
Feb 20 12:04:59.116689 master-0 kubenswrapper[31420]: I0220 12:04:59.116636 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 20 12:04:59.126801 master-0 kubenswrapper[31420]: I0220 12:04:59.126755 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 12:04:59.137704 master-0 kubenswrapper[31420]: I0220 12:04:59.137641 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 20 12:04:59.143827 master-0 kubenswrapper[31420]: I0220 12:04:59.143767 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9fe0660-fae4-4f97-8895-dbc4845cee40-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl"
Feb 20 12:04:59.156347 master-0 kubenswrapper[31420]: I0220 12:04:59.156284 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-zxcjx"
Feb 20 12:04:59.176913 master-0 kubenswrapper[31420]: I0220 12:04:59.176850 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 20 12:04:59.185088 master-0 kubenswrapper[31420]: I0220 12:04:59.185026 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-node-exporter-tls\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc"
Feb 20 12:04:59.197750 master-0 kubenswrapper[31420]: I0220 12:04:59.197690 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 20 12:04:59.206038 master-0 kubenswrapper[31420]: I0220 12:04:59.205983 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z"
Feb 20 12:04:59.217234 master-0 kubenswrapper[31420]: I0220 12:04:59.217185 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-l7xzb"
Feb 20 12:04:59.223816 master-0 kubenswrapper[31420]: E0220 12:04:59.223774 31420 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.223917 master-0 kubenswrapper[31420]: E0220 12:04:59.223854 31420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-cqc0j177hn3k9: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.223917 master-0 kubenswrapper[31420]: E0220 12:04:59.223892 31420 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.224001 master-0 kubenswrapper[31420]: E0220 12:04:59.223870 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls podName:89ed6373-78f8-4d77-82b2-1ab055b5b862 nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.223843849 +0000 UTC m=+4.943082120 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls") pod "openshift-state-metrics-6dbff8cb4c-zbh2z" (UID: "89ed6373-78f8-4d77-82b2-1ab055b5b862") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.224001 master-0 kubenswrapper[31420]: E0220 12:04:59.223993 31420 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.224105 master-0 kubenswrapper[31420]: E0220 12:04:59.223996 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.223963042 +0000 UTC m=+4.943201323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.224105 master-0 kubenswrapper[31420]: E0220 12:04:59.224045 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.224023304 +0000 UTC m=+4.943261555 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.224105 master-0 kubenswrapper[31420]: E0220 12:04:59.224066 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.224057395 +0000 UTC m=+4.943295646 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.224235 master-0 kubenswrapper[31420]: E0220 12:04:59.224148 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:59.224291 master-0 kubenswrapper[31420]: E0220 12:04:59.224270 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.22424026 +0000 UTC m=+4.943478541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:59.224608 master-0 kubenswrapper[31420]: E0220 12:04:59.224538 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:59.224608 master-0 kubenswrapper[31420]: E0220 12:04:59.224588 31420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.224737 master-0 kubenswrapper[31420]: E0220 12:04:59.224650 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.224629191 +0000 UTC m=+4.943867462 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:59.224737 master-0 kubenswrapper[31420]: E0220 12:04:59.224688 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.224670692 +0000 UTC m=+4.943908973 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.225161 master-0 kubenswrapper[31420]: E0220 12:04:59.225115 31420 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.225233 master-0 kubenswrapper[31420]: E0220 12:04:59.225161 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:59.225233 master-0 kubenswrapper[31420]: E0220 12:04:59.225205 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.225185657 +0000 UTC m=+4.944423998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.225318 master-0 kubenswrapper[31420]: E0220 12:04:59.225238 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.225220298 +0000 UTC m=+4.944458579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:59.226409 master-0 kubenswrapper[31420]: E0220 12:04:59.226360 31420 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.226494 master-0 kubenswrapper[31420]: E0220 12:04:59.226411 31420 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.226494 master-0 kubenswrapper[31420]: E0220 12:04:59.226454 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs podName:6479d88f-463f-48ed-846d-2747752a8abb nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.226429752 +0000 UTC m=+4.945668033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs") pod "multus-admission-controller-5f54bf67d4-zxsc2" (UID: "6479d88f-463f-48ed-846d-2747752a8abb") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.226494 master-0 kubenswrapper[31420]: E0220 12:04:59.226458 31420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.226494 master-0 kubenswrapper[31420]: E0220 12:04:59.226479 31420 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:59.226494 master-0 kubenswrapper[31420]: E0220 12:04:59.226493 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.226473783 +0000 UTC m=+4.945712064 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.226928 master-0 kubenswrapper[31420]: E0220 12:04:59.226485 31420 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.226928 master-0 kubenswrapper[31420]: E0220 12:04:59.226566 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs podName:6717f0b4-c2f6-4ed5-94fb-778e5c7c983c nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.226513414 +0000 UTC m=+4.945751815 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs") pod "metrics-server-7dcc9fb5fb-2fx9l" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.226928 master-0 kubenswrapper[31420]: E0220 12:04:59.226604 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle podName:aae1df07-cf9f-47a3-b146-2a0adb182660 nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.226586717 +0000 UTC m=+4.945825138 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle") pod "telemeter-client-796b9bd86f-sp4fc" (UID: "aae1df07-cf9f-47a3-b146-2a0adb182660") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 12:04:59.226928 master-0 kubenswrapper[31420]: E0220 12:04:59.226646 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert podName:39790258-73bc-4c37-a935-e8d3c2a2d5c6 nodeName:}" failed. No retries permitted until 2026-02-20 12:05:00.226630368 +0000 UTC m=+4.945868759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert") pod "ingress-canary-f6xzr" (UID: "39790258-73bc-4c37-a935-e8d3c2a2d5c6") : failed to sync secret cache: timed out waiting for the condition
Feb 20 12:04:59.236890 master-0 kubenswrapper[31420]: I0220 12:04:59.236843 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 20 12:04:59.256230 master-0 kubenswrapper[31420]: I0220 12:04:59.256147 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-mr6l7"
Feb 20 12:04:59.277668 master-0 kubenswrapper[31420]: I0220 12:04:59.277501 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 20 12:04:59.297780 master-0 kubenswrapper[31420]: I0220 12:04:59.297709 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 20 12:04:59.317741 master-0 kubenswrapper[31420]: I0220 12:04:59.317672 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 20 12:04:59.336009 master-0 kubenswrapper[31420]: I0220 12:04:59.335951 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 20 12:04:59.356968 master-0 kubenswrapper[31420]: I0220 12:04:59.356911 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 20 12:04:59.379416 master-0 kubenswrapper[31420]: I0220 12:04:59.379363 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 20 12:04:59.398718 master-0 kubenswrapper[31420]: I0220 12:04:59.397660 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-cqc0j177hn3k9"
Feb 20 12:04:59.416391 master-0 kubenswrapper[31420]: I0220 12:04:59.416341 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 20 12:04:59.436029 master-0 kubenswrapper[31420]: I0220 12:04:59.435983 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-fgsdc"
Feb 20 12:04:59.458245 master-0 kubenswrapper[31420]: I0220 12:04:59.458197 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 20 12:04:59.475747 master-0 kubenswrapper[31420]: I0220 12:04:59.475689 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-j7fmn"
Feb 20 12:04:59.496694 master-0 kubenswrapper[31420]: I0220 12:04:59.496644 31420 request.go:700] Waited for 2.984456287s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemeter-trusted-ca-bundle-8i12ta5c71j38&limit=500&resourceVersion=0
Feb 20 12:04:59.507790 master-0 kubenswrapper[31420]: I0220 12:04:59.507738 31420
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 20 12:04:59.517410 master-0 kubenswrapper[31420]: I0220 12:04:59.517357 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 20 12:04:59.537861 master-0 kubenswrapper[31420]: I0220 12:04:59.537723 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 20 12:04:59.556601 master-0 kubenswrapper[31420]: I0220 12:04:59.556459 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-zhr86" Feb 20 12:04:59.577626 master-0 kubenswrapper[31420]: I0220 12:04:59.576144 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 20 12:04:59.597685 master-0 kubenswrapper[31420]: I0220 12:04:59.596286 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 20 12:04:59.613573 master-0 kubenswrapper[31420]: I0220 12:04:59.613464 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:59.616692 master-0 kubenswrapper[31420]: I0220 12:04:59.616645 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 20 12:04:59.617606 master-0 kubenswrapper[31420]: I0220 12:04:59.616868 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:04:59.657267 master-0 kubenswrapper[31420]: I0220 12:04:59.657185 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ts6s\" (UniqueName: 
\"kubernetes.io/projected/31969539-bfd1-466f-8697-f13cbbd957df-kube-api-access-7ts6s\") pod \"ovnkube-control-plane-5d8dfcdc87-p5qr8\" (UID: \"31969539-bfd1-466f-8697-f13cbbd957df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-p5qr8" Feb 20 12:04:59.671540 master-0 kubenswrapper[31420]: I0220 12:04:59.671465 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f64ql\" (UniqueName: \"kubernetes.io/projected/89ed6373-78f8-4d77-82b2-1ab055b5b862-kube-api-access-f64ql\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:04:59.692736 master-0 kubenswrapper[31420]: I0220 12:04:59.692669 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98aeaf7-bf1a-46af-bf1b-85713baa4c67-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-9zp85\" (UID: \"f98aeaf7-bf1a-46af-bf1b-85713baa4c67\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-9zp85" Feb 20 12:04:59.735424 master-0 kubenswrapper[31420]: I0220 12:04:59.735365 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r85p\" (UniqueName: \"kubernetes.io/projected/b9fe0660-fae4-4f97-8895-dbc4845cee40-kube-api-access-7r85p\") pod \"prometheus-operator-754bc4d665-5kbrl\" (UID: \"b9fe0660-fae4-4f97-8895-dbc4845cee40\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-5kbrl" Feb 20 12:04:59.741048 master-0 kubenswrapper[31420]: I0220 12:04:59.741002 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqxhp\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-kube-api-access-lqxhp\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " 
pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 12:04:59.758172 master-0 kubenswrapper[31420]: I0220 12:04:59.758084 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dx69\" (UniqueName: \"kubernetes.io/projected/62ba4bae-a5e1-4c4d-b544-25d0e59eeac2-kube-api-access-2dx69\") pod \"node-exporter-8d7nc\" (UID: \"62ba4bae-a5e1-4c4d-b544-25d0e59eeac2\") " pod="openshift-monitoring/node-exporter-8d7nc" Feb 20 12:04:59.774377 master-0 kubenswrapper[31420]: I0220 12:04:59.774322 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxr6j\" (UniqueName: \"kubernetes.io/projected/29489539-68c6-49dd-bc1b-dcf0c7bb2ebe-kube-api-access-rxr6j\") pod \"machine-config-controller-54cb48566c-8m59n\" (UID: \"29489539-68c6-49dd-bc1b-dcf0c7bb2ebe\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-8m59n" Feb 20 12:04:59.789002 master-0 kubenswrapper[31420]: I0220 12:04:59.788870 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zmwm\" (UniqueName: \"kubernetes.io/projected/39ccf158-b40f-4dba-90e2-27b1409487b7-kube-api-access-4zmwm\") pod \"network-check-target-h5w2t\" (UID: \"39ccf158-b40f-4dba-90e2-27b1409487b7\") " pod="openshift-network-diagnostics/network-check-target-h5w2t" Feb 20 12:04:59.810926 master-0 kubenswrapper[31420]: I0220 12:04:59.810876 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpz9d\" (UniqueName: \"kubernetes.io/projected/042d8457-04dc-4171-8b0f-f9e3de695c46-kube-api-access-hpz9d\") pod \"kube-state-metrics-59584d565f-9fdgm\" (UID: \"042d8457-04dc-4171-8b0f-f9e3de695c46\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-9fdgm" Feb 20 12:04:59.831756 master-0 kubenswrapper[31420]: I0220 12:04:59.831699 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms8wk\" (UniqueName: 
\"kubernetes.io/projected/836a6d7e-9b26-425f-ae21-00422515d7fe-kube-api-access-ms8wk\") pod \"network-node-identity-psm4s\" (UID: \"836a6d7e-9b26-425f-ae21-00422515d7fe\") " pod="openshift-network-node-identity/network-node-identity-psm4s" Feb 20 12:04:59.857198 master-0 kubenswrapper[31420]: I0220 12:04:59.857078 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hxz5\" (UniqueName: \"kubernetes.io/projected/19cf75ed-6a4e-444d-8975-fa6ecba79f13-kube-api-access-7hxz5\") pod \"certified-operators-76v4z\" (UID: \"19cf75ed-6a4e-444d-8975-fa6ecba79f13\") " pod="openshift-marketplace/certified-operators-76v4z" Feb 20 12:04:59.874271 master-0 kubenswrapper[31420]: I0220 12:04:59.874212 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j4cs\" (UniqueName: \"kubernetes.io/projected/478be5e4-cf17-4ebf-a45a-c18cd2b69929-kube-api-access-5j4cs\") pod \"ovnkube-node-7l848\" (UID: \"478be5e4-cf17-4ebf-a45a-c18cd2b69929\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:04:59.889895 master-0 kubenswrapper[31420]: I0220 12:04:59.889839 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzqt\" (UniqueName: \"kubernetes.io/projected/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-kube-api-access-kfzqt\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:04:59.912608 master-0 kubenswrapper[31420]: I0220 12:04:59.912538 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwgg6\" (UniqueName: \"kubernetes.io/projected/8df029f2-d0ec-4543-9371-7694b1e85a06-kube-api-access-kwgg6\") pod \"redhat-marketplace-89t2q\" (UID: \"8df029f2-d0ec-4543-9371-7694b1e85a06\") " pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 12:04:59.931927 master-0 kubenswrapper[31420]: I0220 12:04:59.931862 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4457\" (UniqueName: \"kubernetes.io/projected/6dfca740-0387-428a-b957-3e8a09c6e352-kube-api-access-d4457\") pod \"marketplace-operator-6f5488b997-nr4tg\" (UID: \"6dfca740-0387-428a-b957-3e8a09c6e352\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg" Feb 20 12:04:59.952082 master-0 kubenswrapper[31420]: I0220 12:04:59.952018 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4j88\" (UniqueName: \"kubernetes.io/projected/bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4-kube-api-access-s4j88\") pod \"csi-snapshot-controller-6847bb4785-792hn\" (UID: \"bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-792hn" Feb 20 12:04:59.968138 master-0 kubenswrapper[31420]: I0220 12:04:59.968082 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdzzt\" (UniqueName: \"kubernetes.io/projected/8ab951b1-6898-4357-b813-16365f3f89d5-kube-api-access-xdzzt\") pod \"cluster-autoscaler-operator-86b8dc6d6-sksbt\" (UID: \"8ab951b1-6898-4357-b813-16365f3f89d5\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-sksbt" Feb 20 12:04:59.987112 master-0 kubenswrapper[31420]: I0220 12:04:59.987062 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zkbq\" (UniqueName: \"kubernetes.io/projected/6c3aa45a-44cc-48fb-a478-ce01a70c4b02-kube-api-access-2zkbq\") pod \"authentication-operator-5bd7c86784-vtcnw\" (UID: \"6c3aa45a-44cc-48fb-a478-ce01a70c4b02\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-vtcnw" Feb 20 12:05:00.007951 master-0 kubenswrapper[31420]: I0220 12:05:00.007894 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlcjf\" (UniqueName: \"kubernetes.io/projected/5c104245-d078-4856-9a60-207bb6efcfe8-kube-api-access-nlcjf\") 
pod \"cluster-samples-operator-65c5c48b9b-2k7xj\" (UID: \"5c104245-d078-4856-9a60-207bb6efcfe8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-2k7xj" Feb 20 12:05:00.027196 master-0 kubenswrapper[31420]: I0220 12:05:00.027148 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpk24\" (UniqueName: \"kubernetes.io/projected/d65a0af4-c96f-44f8-9384-6bae4585983b-kube-api-access-bpk24\") pod \"olm-operator-5499d7f7bb-6qtzc\" (UID: \"d65a0af4-c96f-44f8-9384-6bae4585983b\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc" Feb 20 12:05:00.048402 master-0 kubenswrapper[31420]: I0220 12:05:00.048274 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2795m\" (UniqueName: \"kubernetes.io/projected/afa174b3-912c-4b56-b5eb-f3e3df012c11-kube-api-access-2795m\") pod \"node-resolver-jlp7n\" (UID: \"afa174b3-912c-4b56-b5eb-f3e3df012c11\") " pod="openshift-dns/node-resolver-jlp7n" Feb 20 12:05:00.067629 master-0 kubenswrapper[31420]: I0220 12:05:00.067543 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nd7r\" (UniqueName: \"kubernetes.io/projected/4cbb46f1-1c33-42fc-8371-6a1bea8c28ff-kube-api-access-8nd7r\") pod \"cluster-node-tuning-operator-bcf775fc9-gwpst\" (UID: \"4cbb46f1-1c33-42fc-8371-6a1bea8c28ff\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-gwpst" Feb 20 12:05:00.091196 master-0 kubenswrapper[31420]: I0220 12:05:00.091130 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpnmz\" (UniqueName: \"kubernetes.io/projected/312ca024-c8f0-4994-8f9a-b707607341fe-kube-api-access-bpnmz\") pod \"network-operator-7d7db75979-fv598\" (UID: \"312ca024-c8f0-4994-8f9a-b707607341fe\") " pod="openshift-network-operator/network-operator-7d7db75979-fv598" Feb 20 12:05:00.110367 master-0 kubenswrapper[31420]: I0220 12:05:00.110314 
31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp57v\" (UniqueName: \"kubernetes.io/projected/59c1cc61-8692-4a35-83fc-6bbef7086117-kube-api-access-mp57v\") pod \"apiserver-7666bb78cc-jxswr\" (UID: \"59c1cc61-8692-4a35-83fc-6bbef7086117\") " pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:05:00.141049 master-0 kubenswrapper[31420]: I0220 12:05:00.140970 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6td56\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-kube-api-access-6td56\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 12:05:00.147981 master-0 kubenswrapper[31420]: I0220 12:05:00.147924 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksx6l\" (UniqueName: \"kubernetes.io/projected/e8c48a22-ed96-42c5-ac4a-dd7d4f204539-kube-api-access-ksx6l\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg\" (UID: \"e8c48a22-ed96-42c5-ac4a-dd7d4f204539\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg" Feb 20 12:05:00.168783 master-0 kubenswrapper[31420]: I0220 12:05:00.168724 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf682\" (UniqueName: \"kubernetes.io/projected/ae1fd116-6f63-4344-b7af-278665649e5a-kube-api-access-wf682\") pod \"packageserver-795fd44d5c-t99pw\" (UID: \"ae1fd116-6f63-4344-b7af-278665649e5a\") " pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 12:05:00.201506 master-0 kubenswrapper[31420]: I0220 12:05:00.201444 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqzpj\" (UniqueName: \"kubernetes.io/projected/aae1df07-cf9f-47a3-b146-2a0adb182660-kube-api-access-qqzpj\") pod 
\"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.219152 master-0 kubenswrapper[31420]: I0220 12:05:00.219052 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78bqv\" (UniqueName: \"kubernetes.io/projected/daf25ef5-8247-4dbb-bdc1-55104b1015b7-kube-api-access-78bqv\") pod \"insights-operator-59b498fcfb-hsjr7\" (UID: \"daf25ef5-8247-4dbb-bdc1-55104b1015b7\") " pod="openshift-insights/insights-operator-59b498fcfb-hsjr7" Feb 20 12:05:00.237171 master-0 kubenswrapper[31420]: I0220 12:05:00.237127 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkn7h\" (UniqueName: \"kubernetes.io/projected/37cb3bb1-f5ba-4b7b-9af9-55bf61906a51-kube-api-access-qkn7h\") pod \"machine-config-daemon-mpwks\" (UID: \"37cb3bb1-f5ba-4b7b-9af9-55bf61906a51\") " pod="openshift-machine-config-operator/machine-config-daemon-mpwks" Feb 20 12:05:00.259910 master-0 kubenswrapper[31420]: I0220 12:05:00.259836 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxs4n\" (UniqueName: \"kubernetes.io/projected/d9f9442b-25b9-420f-b748-bb13423809fe-kube-api-access-kxs4n\") pod \"catalogd-controller-manager-84b8d9d697-k8vs5\" (UID: \"d9f9442b-25b9-420f-b748-bb13423809fe\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5" Feb 20 12:05:00.273686 master-0 kubenswrapper[31420]: I0220 12:05:00.273635 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.273869 master-0 kubenswrapper[31420]: I0220 12:05:00.273719 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.273869 master-0 kubenswrapper[31420]: I0220 12:05:00.273791 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.274013 master-0 kubenswrapper[31420]: I0220 12:05:00.273926 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.274013 master-0 kubenswrapper[31420]: I0220 12:05:00.273983 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:05:00.274144 master-0 kubenswrapper[31420]: I0220 12:05:00.274014 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" 
(UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.274491 master-0 kubenswrapper[31420]: I0220 12:05:00.274454 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.274665 master-0 kubenswrapper[31420]: I0220 12:05:00.274485 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-federate-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.274934 master-0 kubenswrapper[31420]: I0220 12:05:00.274897 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.275056 master-0 kubenswrapper[31420]: I0220 12:05:00.274945 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-client-tls\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.275056 master-0 kubenswrapper[31420]: I0220 12:05:00.274988 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/89ed6373-78f8-4d77-82b2-1ab055b5b862-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-zbh2z\" (UID: \"89ed6373-78f8-4d77-82b2-1ab055b5b862\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-zbh2z" Feb 20 12:05:00.275056 master-0 kubenswrapper[31420]: I0220 12:05:00.275010 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.275272 master-0 kubenswrapper[31420]: I0220 12:05:00.275220 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.275344 master-0 kubenswrapper[31420]: I0220 12:05:00.275290 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.275344 master-0 kubenswrapper[31420]: I0220 12:05:00.275330 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " 
pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" Feb 20 12:05:00.275477 master-0 kubenswrapper[31420]: I0220 12:05:00.275439 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.275581 master-0 kubenswrapper[31420]: I0220 12:05:00.275484 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.276245 master-0 kubenswrapper[31420]: I0220 12:05:00.275606 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr" Feb 20 12:05:00.276245 master-0 kubenswrapper[31420]: I0220 12:05:00.275659 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.276245 master-0 kubenswrapper[31420]: I0220 12:05:00.275707 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.276245 master-0 kubenswrapper[31420]: I0220 12:05:00.275741 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6479d88f-463f-48ed-846d-2747752a8abb-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" Feb 20 12:05:00.276245 master-0 kubenswrapper[31420]: I0220 12:05:00.275796 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.276245 master-0 kubenswrapper[31420]: I0220 12:05:00.275948 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-telemeter-trusted-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.276245 master-0 kubenswrapper[31420]: I0220 12:05:00.275952 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: 
\"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.276245 master-0 kubenswrapper[31420]: I0220 12:05:00.276100 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs\") pod \"metrics-server-7dcc9fb5fb-2fx9l\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") " pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:00.276245 master-0 kubenswrapper[31420]: I0220 12:05:00.276152 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39790258-73bc-4c37-a935-e8d3c2a2d5c6-cert\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr" Feb 20 12:05:00.276960 master-0 kubenswrapper[31420]: I0220 12:05:00.276283 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aae1df07-cf9f-47a3-b146-2a0adb182660-secret-telemeter-client\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.277164 master-0 kubenswrapper[31420]: I0220 12:05:00.277119 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae1df07-cf9f-47a3-b146-2a0adb182660-serving-certs-ca-bundle\") pod \"telemeter-client-796b9bd86f-sp4fc\" (UID: \"aae1df07-cf9f-47a3-b146-2a0adb182660\") " pod="openshift-monitoring/telemeter-client-796b9bd86f-sp4fc" Feb 20 12:05:00.281528 master-0 kubenswrapper[31420]: I0220 12:05:00.281462 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwcs\" (UniqueName: 
\"kubernetes.io/projected/533fe3c7-504f-40aa-aab0-8d66ef27920f-kube-api-access-jrwcs\") pod \"multus-9qpc7\" (UID: \"533fe3c7-504f-40aa-aab0-8d66ef27920f\") " pod="openshift-multus/multus-9qpc7" Feb 20 12:05:00.294179 master-0 kubenswrapper[31420]: I0220 12:05:00.294129 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89383482-190e-4f74-a81e-b1547e5b9ae6-kube-api-access\") pod \"cluster-version-operator-57476485-dwvgg\" (UID: \"89383482-190e-4f74-a81e-b1547e5b9ae6\") " pod="openshift-cluster-version/cluster-version-operator-57476485-dwvgg" Feb 20 12:05:00.414903 master-0 kubenswrapper[31420]: I0220 12:05:00.414711 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cnvt\" (UniqueName: \"kubernetes.io/projected/fca78741-ca32-4867-b44f-483fd62f2942-kube-api-access-2cnvt\") pod \"network-check-source-58fb6744f5-gjgxv\" (UID: \"fca78741-ca32-4867-b44f-483fd62f2942\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-gjgxv" Feb 20 12:05:00.417113 master-0 kubenswrapper[31420]: I0220 12:05:00.417041 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2dv\" (UniqueName: \"kubernetes.io/projected/02c6a0e7-6363-4d7e-a8eb-b4d38b74b145-kube-api-access-8k2dv\") pod \"openshift-config-operator-6f47d587d6-mk9fd\" (UID: \"02c6a0e7-6363-4d7e-a8eb-b4d38b74b145\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 12:05:00.425255 master-0 kubenswrapper[31420]: I0220 12:05:00.425197 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p4w6\" (UniqueName: \"kubernetes.io/projected/1fca5d50-eb5f-4dbb-bdf6-8e07231406f9-kube-api-access-8p4w6\") pod \"kube-storage-version-migrator-operator-fc889cfd5-stms8\" (UID: \"1fca5d50-eb5f-4dbb-bdf6-8e07231406f9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-stms8" Feb 20 12:05:00.425944 master-0 kubenswrapper[31420]: I0220 12:05:00.425893 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j82z\" (UniqueName: \"kubernetes.io/projected/906307ef-d988-49e7-9d63-39116a2c4880-kube-api-access-5j82z\") pod \"iptables-alerter-gkxzr\" (UID: \"906307ef-d988-49e7-9d63-39116a2c4880\") " pod="openshift-network-operator/iptables-alerter-gkxzr" Feb 20 12:05:00.429103 master-0 kubenswrapper[31420]: I0220 12:05:00.429038 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k8n8\" (UniqueName: \"kubernetes.io/projected/ce2b6fde-de56-49c3-9bd6-e81c679b02bc-kube-api-access-2k8n8\") pod \"cluster-olm-operator-5bd7768f54-j5fsc\" (UID: \"ce2b6fde-de56-49c3-9bd6-e81c679b02bc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-j5fsc" Feb 20 12:05:00.430634 master-0 kubenswrapper[31420]: I0220 12:05:00.430516 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5z86\" (UniqueName: \"kubernetes.io/projected/11aaad8c-2f25-460f-b4af-f27d8bc682a0-kube-api-access-x5z86\") pod \"redhat-operators-q287t\" (UID: \"11aaad8c-2f25-460f-b4af-f27d8bc682a0\") " pod="openshift-marketplace/redhat-operators-q287t" Feb 20 12:05:00.438763 master-0 kubenswrapper[31420]: I0220 12:05:00.438715 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a7cb1-1d05-4b24-86ed-f823fad5013e-bound-sa-token\") pod \"ingress-operator-6569778c84-kw2v6\" (UID: \"db2a7cb1-1d05-4b24-86ed-f823fad5013e\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-kw2v6" Feb 20 12:05:00.452263 master-0 kubenswrapper[31420]: I0220 12:05:00.452208 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2qdb\" (UniqueName: 
\"kubernetes.io/projected/9c078827-3bdb-4509-aeb3-eb558df1f6e7-kube-api-access-x2qdb\") pod \"router-default-7b65dc9fcb-fkkd5\" (UID: \"9c078827-3bdb-4509-aeb3-eb558df1f6e7\") " pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:05:00.470002 master-0 kubenswrapper[31420]: I0220 12:05:00.469942 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7krn8\" (UniqueName: \"kubernetes.io/projected/fca213c3-42ca-4341-a2e6-a143b9389f9e-kube-api-access-7krn8\") pod \"apiserver-69fc79b84-rr6rh\" (UID: \"fca213c3-42ca-4341-a2e6-a143b9389f9e\") " pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:05:00.491311 master-0 kubenswrapper[31420]: I0220 12:05:00.491243 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc9wx\" (UniqueName: \"kubernetes.io/projected/b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1-kube-api-access-sc9wx\") pod \"operator-controller-controller-manager-9cc7d7bb-vs87f\" (UID: \"b2c2ee35-8ef2-4a79-a5c5-95cdd12653e1\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f" Feb 20 12:05:00.511667 master-0 kubenswrapper[31420]: I0220 12:05:00.511604 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvxsh\" (UniqueName: \"kubernetes.io/projected/839bf5b1-b242-4bbd-bc09-cf6abcf7f734-kube-api-access-pvxsh\") pod \"csi-snapshot-controller-operator-6fb4df594f-8x7xw\" (UID: \"839bf5b1-b242-4bbd-bc09-cf6abcf7f734\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8x7xw" Feb 20 12:05:00.514990 master-0 kubenswrapper[31420]: I0220 12:05:00.514949 31420 request.go:700] Waited for 3.889214807s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-server/token Feb 20 12:05:00.532253 master-0 kubenswrapper[31420]: I0220 
12:05:00.532196 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m98rt\" (UniqueName: \"kubernetes.io/projected/2f9cd117-c84f-44c9-80a9-879a04d62934-kube-api-access-m98rt\") pod \"machine-config-server-4wkh4\" (UID: \"2f9cd117-c84f-44c9-80a9-879a04d62934\") " pod="openshift-machine-config-operator/machine-config-server-4wkh4" Feb 20 12:05:00.549471 master-0 kubenswrapper[31420]: I0220 12:05:00.549423 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbsxw\" (UniqueName: \"kubernetes.io/projected/62fc400b-b3dd-4134-bd27-69dd8369153a-kube-api-access-zbsxw\") pod \"machine-api-operator-5c7cf458b4-dmvlr\" (UID: \"62fc400b-b3dd-4134-bd27-69dd8369153a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dmvlr" Feb 20 12:05:00.570023 master-0 kubenswrapper[31420]: I0220 12:05:00.569973 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dx9s\" (UniqueName: \"kubernetes.io/projected/34382460-b2d7-4154-87ba-c0347a4c0f1b-kube-api-access-5dx9s\") pod \"community-operators-7kn5q\" (UID: \"34382460-b2d7-4154-87ba-c0347a4c0f1b\") " pod="openshift-marketplace/community-operators-7kn5q" Feb 20 12:05:00.582391 master-0 kubenswrapper[31420]: I0220 12:05:00.582335 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:05:00.588319 master-0 kubenswrapper[31420]: I0220 12:05:00.588276 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvm8\" (UniqueName: \"kubernetes.io/projected/5360f3f5-2d07-432f-af45-22659538c55e-kube-api-access-7vvm8\") pod \"openshift-controller-manager-operator-584cc7bcb5-qdb75\" (UID: \"5360f3f5-2d07-432f-af45-22659538c55e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-qdb75" Feb 20 12:05:00.607841 master-0 kubenswrapper[31420]: I0220 12:05:00.607784 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c9xz\" (UniqueName: \"kubernetes.io/projected/af18215b-e749-4565-bb6c-24e92c452817-kube-api-access-7c9xz\") pod \"dns-default-kx4ch\" (UID: \"af18215b-e749-4565-bb6c-24e92c452817\") " pod="openshift-dns/dns-default-kx4ch" Feb 20 12:05:00.619843 master-0 kubenswrapper[31420]: I0220 12:05:00.619802 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:05:00.633699 master-0 kubenswrapper[31420]: I0220 12:05:00.633647 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfmdd\" (UniqueName: \"kubernetes.io/projected/6479d88f-463f-48ed-846d-2747752a8abb-kube-api-access-mfmdd\") pod \"multus-admission-controller-5f54bf67d4-zxsc2\" (UID: \"6479d88f-463f-48ed-846d-2747752a8abb\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-zxsc2" Feb 20 12:05:00.639900 master-0 kubenswrapper[31420]: I0220 12:05:00.639849 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l848" Feb 20 12:05:00.651443 master-0 kubenswrapper[31420]: I0220 12:05:00.651398 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmwx\" (UniqueName: \"kubernetes.io/projected/bbdbadd9-eeaa-46ef-936e-5db8d395c118-kube-api-access-ttmwx\") pod \"cluster-storage-operator-f94476f49-d9vsg\" (UID: \"bbdbadd9-eeaa-46ef-936e-5db8d395c118\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-d9vsg" Feb 20 12:05:00.675739 master-0 kubenswrapper[31420]: I0220 12:05:00.675596 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wnh5\" (UniqueName: \"kubernetes.io/projected/22bba1b3-587d-4802-b4ae-946827c3fa7a-kube-api-access-2wnh5\") pod \"cluster-monitoring-operator-6bb6d78bf-5zl5l\" (UID: \"22bba1b3-587d-4802-b4ae-946827c3fa7a\") 
" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-5zl5l" Feb 20 12:05:00.675739 master-0 kubenswrapper[31420]: I0220 12:05:00.675697 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 12:05:00.681300 master-0 kubenswrapper[31420]: I0220 12:05:00.681260 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-mk9fd" Feb 20 12:05:00.687437 master-0 kubenswrapper[31420]: I0220 12:05:00.687403 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:05:00.687708 master-0 kubenswrapper[31420]: I0220 12:05:00.687672 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kx4ch" Feb 20 12:05:00.687768 master-0 kubenswrapper[31420]: I0220 12:05:00.687732 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" Feb 20 12:05:00.687811 master-0 kubenswrapper[31420]: I0220 12:05:00.687799 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:05:00.688186 master-0 kubenswrapper[31420]: I0220 12:05:00.688159 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kx4ch" Feb 20 12:05:00.692477 master-0 kubenswrapper[31420]: I0220 12:05:00.692441 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8djgj\" (UniqueName: \"kubernetes.io/projected/1fb59696-1d5f-41bb-9211-b89c63b10840-kube-api-access-8djgj\") pod \"migrator-5c85bff57-j46n9\" (UID: \"1fb59696-1d5f-41bb-9211-b89c63b10840\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-j46n9" Feb 20 12:05:00.718999 master-0 
kubenswrapper[31420]: I0220 12:05:00.718952 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk5sc\" (UniqueName: \"kubernetes.io/projected/eb135cff-1a2e-468d-80ab-f7db3f57552a-kube-api-access-tk5sc\") pod \"machine-config-operator-7f8c75f984-vvvjt\" (UID: \"eb135cff-1a2e-468d-80ab-f7db3f57552a\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-vvvjt" Feb 20 12:05:00.737908 master-0 kubenswrapper[31420]: I0220 12:05:00.737829 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7k2n\" (UniqueName: \"kubernetes.io/projected/c29fd426-7c89-434e-8332-1ca31075d4bf-kube-api-access-z7k2n\") pod \"route-controller-manager-689d967cd5-ptpq6\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:05:00.749218 master-0 kubenswrapper[31420]: I0220 12:05:00.749169 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kct\" (UniqueName: \"kubernetes.io/projected/dbce6cdc-040a-48e1-8a81-b6ff9c180eba-kube-api-access-z2kct\") pod \"package-server-manager-5c75f78c8b-mr99g\" (UID: \"dbce6cdc-040a-48e1-8a81-b6ff9c180eba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g" Feb 20 12:05:00.784685 master-0 kubenswrapper[31420]: I0220 12:05:00.784625 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq4ct\" (UniqueName: \"kubernetes.io/projected/8a97bbf5-7409-4f36-894b-b88284e1b6d0-kube-api-access-vq4ct\") pod \"service-ca-576b4d78bd-5fph4\" (UID: \"8a97bbf5-7409-4f36-894b-b88284e1b6d0\") " pod="openshift-service-ca/service-ca-576b4d78bd-5fph4" Feb 20 12:05:00.788132 master-0 kubenswrapper[31420]: I0220 12:05:00.788091 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2hwr\" (UniqueName: 
\"kubernetes.io/projected/98226a59-5234-48f3-a9cd-21de305810dc-kube-api-access-j2hwr\") pod \"controller-manager-599c7886f5-zltnd\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:05:00.808429 master-0 kubenswrapper[31420]: I0220 12:05:00.808379 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26x7b\" (UniqueName: \"kubernetes.io/projected/4d060bff-3c25-4eeb-bdd3-e20fb2687645-kube-api-access-26x7b\") pod \"catalog-operator-596f79dd6f-bjxbt\" (UID: \"4d060bff-3c25-4eeb-bdd3-e20fb2687645\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 12:05:00.839645 master-0 kubenswrapper[31420]: I0220 12:05:00.839504 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxm8p\" (UniqueName: \"kubernetes.io/projected/b9eb45bd-fc01-4707-87ea-64f07f72f6f9-kube-api-access-qxm8p\") pod \"tuned-z82cm\" (UID: \"b9eb45bd-fc01-4707-87ea-64f07f72f6f9\") " pod="openshift-cluster-node-tuning-operator/tuned-z82cm" Feb 20 12:05:00.859479 master-0 kubenswrapper[31420]: I0220 12:05:00.859406 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn7cs\" (UniqueName: \"kubernetes.io/projected/ef18ace4-7316-4600-9be9-2adc792705e9-kube-api-access-kn7cs\") pod \"cloud-credential-operator-6968c58f46-fq68q\" (UID: \"ef18ace4-7316-4600-9be9-2adc792705e9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fq68q" Feb 20 12:05:00.875673 master-0 kubenswrapper[31420]: I0220 12:05:00.875612 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk6hv\" (UniqueName: \"kubernetes.io/projected/21e8e44b-b883-4afb-af90-d6c1265edf34-kube-api-access-rk6hv\") pod \"control-plane-machine-set-operator-686847ff5f-fn7j5\" (UID: \"21e8e44b-b883-4afb-af90-d6c1265edf34\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-fn7j5" Feb 20 12:05:00.896488 master-0 kubenswrapper[31420]: I0220 12:05:00.896409 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-r9ntt\" (UID: \"7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-r9ntt" Feb 20 12:05:00.915284 master-0 kubenswrapper[31420]: I0220 12:05:00.915222 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89t2q" Feb 20 12:05:00.919327 master-0 kubenswrapper[31420]: I0220 12:05:00.919282 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1388469-5e55-4c1b-97c3-c88777f29ae7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lsxtj\" (UID: \"f1388469-5e55-4c1b-97c3-c88777f29ae7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lsxtj" Feb 20 12:05:00.934081 master-0 kubenswrapper[31420]: I0220 12:05:00.933964 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pw4\" (UniqueName: \"kubernetes.io/projected/07281644-2789-424f-8429-aa4448dda01e-kube-api-access-l5pw4\") pod \"multus-additional-cni-plugins-f2l64\" (UID: \"07281644-2789-424f-8429-aa4448dda01e\") " pod="openshift-multus/multus-additional-cni-plugins-f2l64" Feb 20 12:05:00.963480 master-0 kubenswrapper[31420]: I0220 12:05:00.963377 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79j9f\" (UniqueName: \"kubernetes.io/projected/1709ef31-9ddd-42bf-9a95-4be4502a0828-kube-api-access-79j9f\") pod \"network-metrics-daemon-29622\" (UID: \"1709ef31-9ddd-42bf-9a95-4be4502a0828\") 
" pod="openshift-multus/network-metrics-daemon-29622" Feb 20 12:05:00.982994 master-0 kubenswrapper[31420]: I0220 12:05:00.982902 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcnmk\" (UniqueName: \"kubernetes.io/projected/b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8-kube-api-access-rcnmk\") pod \"dns-operator-8c7d49845-qhx9j\" (UID: \"b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8\") " pod="openshift-dns-operator/dns-operator-8c7d49845-qhx9j" Feb 20 12:05:01.000696 master-0 kubenswrapper[31420]: I0220 12:05:01.000592 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2tk7\" (UniqueName: \"kubernetes.io/projected/01e90033-9ddf-41b4-ab61-e89add6c2fde-kube-api-access-j2tk7\") pod \"service-ca-operator-c48c8bf7c-qwwbk\" (UID: \"01e90033-9ddf-41b4-ab61-e89add6c2fde\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-qwwbk" Feb 20 12:05:01.014752 master-0 kubenswrapper[31420]: I0220 12:05:01.014688 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5m78\" (UniqueName: \"kubernetes.io/projected/7635c0ff-4d40-4310-8187-230323e504e0-kube-api-access-p5m78\") pod \"machine-approver-7dd9c7d7b9-qg84l\" (UID: \"7635c0ff-4d40-4310-8187-230323e504e0\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-qg84l" Feb 20 12:05:01.046436 master-0 kubenswrapper[31420]: I0220 12:05:01.046332 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94lkp\" (UniqueName: \"kubernetes.io/projected/39790258-73bc-4c37-a935-e8d3c2a2d5c6-kube-api-access-94lkp\") pod \"ingress-canary-f6xzr\" (UID: \"39790258-73bc-4c37-a935-e8d3c2a2d5c6\") " pod="openshift-ingress-canary/ingress-canary-f6xzr" Feb 20 12:05:01.061788 master-0 kubenswrapper[31420]: I0220 12:05:01.061471 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mggv\" (UniqueName: 
\"kubernetes.io/projected/1df81fcc-f967-4874-ad16-1a89f0e7875a-kube-api-access-7mggv\") pod \"openshift-apiserver-operator-8586dccc9b-lfdtx\" (UID: \"1df81fcc-f967-4874-ad16-1a89f0e7875a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-lfdtx" Feb 20 12:05:01.085465 master-0 kubenswrapper[31420]: I0220 12:05:01.085376 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0b28c90-d5b6-44f3-867c-020ece32ac7d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-kg75v\" (UID: \"e0b28c90-d5b6-44f3-867c-020ece32ac7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-kg75v" Feb 20 12:05:01.091581 master-0 kubenswrapper[31420]: I0220 12:05:01.091486 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvjcp\" (UniqueName: \"kubernetes.io/projected/1d3a36bb-9d11-48b3-a3b5-07b47738ef97-kube-api-access-lvjcp\") pod \"etcd-operator-545bf96f4d-d69w2\" (UID: \"1d3a36bb-9d11-48b3-a3b5-07b47738ef97\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-d69w2" Feb 20 12:05:01.120960 master-0 kubenswrapper[31420]: I0220 12:05:01.120850 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zztmz\" (UniqueName: \"kubernetes.io/projected/bd609bd3-2525-4b88-8f07-94a0418fb582-kube-api-access-zztmz\") pod \"cluster-baremetal-operator-d6bb9bb76-k95mq\" (UID: \"bd609bd3-2525-4b88-8f07-94a0418fb582\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k95mq" Feb 20 12:05:01.140098 master-0 kubenswrapper[31420]: E0220 12:05:01.139996 31420 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:01.140098 master-0 kubenswrapper[31420]: E0220 12:05:01.140061 31420 projected.go:194] Error preparing data for projected volume kube-api-access for pod 
openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:01.140396 master-0 kubenswrapper[31420]: E0220 12:05:01.140176 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access podName:97095f88-ee81-4a47-9bd7-1dbe71ec8d4d nodeName:}" failed. No retries permitted until 2026-02-20 12:05:01.640141336 +0000 UTC m=+6.359379647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access") pod "installer-3-master-0" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:01.480991 master-0 kubenswrapper[31420]: I0220 12:05:01.480730 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:05:01.480991 master-0 kubenswrapper[31420]: I0220 12:05:01.481000 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 12:05:01.583008 master-0 kubenswrapper[31420]: I0220 12:05:01.582916 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:05:01.589251 master-0 kubenswrapper[31420]: I0220 12:05:01.589209 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:05:01.692298 master-0 kubenswrapper[31420]: I0220 12:05:01.692193 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5" Feb 20 12:05:01.710465 master-0 kubenswrapper[31420]: I0220 12:05:01.710301 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:05:01.710743 master-0 kubenswrapper[31420]: E0220 12:05:01.710473 31420 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:01.710743 master-0 kubenswrapper[31420]: E0220 12:05:01.710505 31420 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:01.710743 master-0 kubenswrapper[31420]: E0220 12:05:01.710576 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access podName:97095f88-ee81-4a47-9bd7-1dbe71ec8d4d nodeName:}" failed. No retries permitted until 2026-02-20 12:05:02.710558099 +0000 UTC m=+7.429796340 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access") pod "installer-3-master-0" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:01.902277 master-0 kubenswrapper[31420]: I0220 12:05:01.902200 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 12:05:01.902277 master-0 kubenswrapper[31420]: I0220 12:05:01.902238 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 12:05:02.115999 master-0 kubenswrapper[31420]: I0220 12:05:02.115884 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:05:02.428667 master-0 kubenswrapper[31420]: I0220 12:05:02.428551 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=11.428507327 podStartE2EDuration="11.428507327s" podCreationTimestamp="2026-02-20 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:05:02.427267302 +0000 UTC m=+7.146505553" watchObservedRunningTime="2026-02-20 12:05:02.428507327 +0000 UTC m=+7.147745568" Feb 20 12:05:02.435770 master-0 kubenswrapper[31420]: I0220 12:05:02.435684 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 12:05:02.440978 master-0 kubenswrapper[31420]: I0220 12:05:02.440908 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-795fd44d5c-t99pw" Feb 20 12:05:02.580022 master-0 kubenswrapper[31420]: I0220 12:05:02.579962 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 20 12:05:02.593886 master-0 kubenswrapper[31420]: I0220 12:05:02.593841 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 20 12:05:02.739646 master-0 kubenswrapper[31420]: I0220 12:05:02.739323 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:05:02.740026 master-0 kubenswrapper[31420]: E0220 12:05:02.740007 31420 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:02.740109 master-0 kubenswrapper[31420]: E0220 12:05:02.740098 31420 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:02.740272 master-0 kubenswrapper[31420]: E0220 12:05:02.740256 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access podName:97095f88-ee81-4a47-9bd7-1dbe71ec8d4d nodeName:}" failed. No retries permitted until 2026-02-20 12:05:04.740233543 +0000 UTC m=+9.459471794 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access") pod "installer-3-master-0" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:02.927845 master-0 kubenswrapper[31420]: I0220 12:05:02.926176 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 20 12:05:03.054658 master-0 kubenswrapper[31420]: I0220 12:05:03.054593 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 12:05:03.055871 master-0 kubenswrapper[31420]: I0220 12:05:03.055807 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=7.055793517 podStartE2EDuration="7.055793517s" podCreationTimestamp="2026-02-20 12:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:05:03.054114149 +0000 UTC m=+7.773352410" watchObservedRunningTime="2026-02-20 12:05:03.055793517 +0000 UTC m=+7.775031758" Feb 20 12:05:03.059872 master-0 kubenswrapper[31420]: I0220 12:05:03.059790 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:05:03.064321 master-0 kubenswrapper[31420]: I0220 12:05:03.064276 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" Feb 20 12:05:03.066337 master-0 kubenswrapper[31420]: I0220 12:05:03.066304 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:05:03.104020 master-0 
kubenswrapper[31420]: I0220 12:05:03.103910 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:05:03.104346 master-0 kubenswrapper[31420]: I0220 12:05:03.104170 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 12:05:03.107822 master-0 kubenswrapper[31420]: I0220 12:05:03.107783 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:05:03.382793 master-0 kubenswrapper[31420]: I0220 12:05:03.382636 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q287t" Feb 20 12:05:03.440061 master-0 kubenswrapper[31420]: I0220 12:05:03.439988 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q287t" Feb 20 12:05:04.099124 master-0 kubenswrapper[31420]: I0220 12:05:04.099050 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:05:04.099732 master-0 kubenswrapper[31420]: I0220 12:05:04.099217 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 12:05:04.105674 master-0 kubenswrapper[31420]: I0220 12:05:04.105640 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 12:05:04.770222 master-0 kubenswrapper[31420]: I0220 12:05:04.770122 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:05:04.770688 master-0 
kubenswrapper[31420]: E0220 12:05:04.770336 31420 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:04.770688 master-0 kubenswrapper[31420]: E0220 12:05:04.770370 31420 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:04.770688 master-0 kubenswrapper[31420]: E0220 12:05:04.770431 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access podName:97095f88-ee81-4a47-9bd7-1dbe71ec8d4d nodeName:}" failed. No retries permitted until 2026-02-20 12:05:08.77040982 +0000 UTC m=+13.489648071 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access") pod "installer-3-master-0" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 12:05:05.171134 master-0 kubenswrapper[31420]: I0220 12:05:05.171020 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" Feb 20 12:05:05.629483 master-0 kubenswrapper[31420]: I0220 12:05:05.629421 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:05:05.634709 master-0 kubenswrapper[31420]: I0220 12:05:05.634653 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7666bb78cc-jxswr" Feb 20 12:05:05.699360 master-0 kubenswrapper[31420]: I0220 12:05:05.699293 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh" 
Feb 20 12:05:05.705400 master-0 kubenswrapper[31420]: I0220 12:05:05.705333 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69fc79b84-rr6rh"
Feb 20 12:05:06.075162 master-0 kubenswrapper[31420]: I0220 12:05:06.075087 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 12:05:06.075567 master-0 kubenswrapper[31420]: I0220 12:05:06.075483 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:05:06.076771 master-0 kubenswrapper[31420]: I0220 12:05:06.076741 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-vs87f"
Feb 20 12:05:06.083568 master-0 kubenswrapper[31420]: I0220 12:05:06.083487 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:05:06.141807 master-0 kubenswrapper[31420]: I0220 12:05:06.141731 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:05:06.142205 master-0 kubenswrapper[31420]: I0220 12:05:06.142174 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 12:05:06.142270 master-0 kubenswrapper[31420]: I0220 12:05:06.142228 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 12:05:06.176364 master-0 kubenswrapper[31420]: I0220 12:05:06.176305 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:05:06.265033 master-0 kubenswrapper[31420]: I0220 12:05:06.264955 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn"
Feb 20 12:05:06.270814 master-0 kubenswrapper[31420]: I0220 12:05:06.270761 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-s57jn"
Feb 20 12:05:06.633786 master-0 kubenswrapper[31420]: I0220 12:05:06.633670 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 12:05:06.636340 master-0 kubenswrapper[31420]: I0220 12:05:06.636262 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-h5w2t"
Feb 20 12:05:06.777609 master-0 kubenswrapper[31420]: I0220 12:05:06.777544 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 12:05:06.787185 master-0 kubenswrapper[31420]: I0220 12:05:06.785059 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-mr99g"
Feb 20 12:05:06.942498 master-0 kubenswrapper[31420]: I0220 12:05:06.942337 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 12:05:06.947790 master-0 kubenswrapper[31420]: I0220 12:05:06.947747 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:05:06.994118 master-0 kubenswrapper[31420]: I0220 12:05:06.994017 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7kn5q"
Feb 20 12:05:07.068246 master-0 kubenswrapper[31420]: I0220 12:05:07.068169 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7kn5q"
Feb 20 12:05:07.339932 master-0 kubenswrapper[31420]: I0220 12:05:07.339866 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-76v4z"
Feb 20 12:05:07.407042 master-0 kubenswrapper[31420]: I0220 12:05:07.406970 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-76v4z"
Feb 20 12:05:07.813657 master-0 kubenswrapper[31420]: I0220 12:05:07.813562 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-76v4z"
Feb 20 12:05:07.888976 master-0 kubenswrapper[31420]: I0220 12:05:07.888900 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-76v4z"
Feb 20 12:05:07.975131 master-0 kubenswrapper[31420]: I0220 12:05:07.975076 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 12:05:07.977146 master-0 kubenswrapper[31420]: I0220 12:05:07.977096 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-nr4tg"
Feb 20 12:05:08.251577 master-0 kubenswrapper[31420]: I0220 12:05:08.251437 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:05:08.277711 master-0 kubenswrapper[31420]: I0220 12:05:08.277640 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7kn5q"
Feb 20 12:05:08.360106 master-0 kubenswrapper[31420]: I0220 12:05:08.360043 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7kn5q"
Feb 20 12:05:08.556604 master-0 kubenswrapper[31420]: I0220 12:05:08.556023 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 12:05:08.558607 master-0 kubenswrapper[31420]: I0220 12:05:08.558557 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-k8vs5"
Feb 20 12:05:08.604665 master-0 kubenswrapper[31420]: I0220 12:05:08.604031 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89t2q"
Feb 20 12:05:08.777562 master-0 kubenswrapper[31420]: I0220 12:05:08.775202 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 12:05:08.777562 master-0 kubenswrapper[31420]: I0220 12:05:08.775313 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 12:05:08.790555 master-0 kubenswrapper[31420]: I0220 12:05:08.789719 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7b65dc9fcb-fkkd5"
Feb 20 12:05:08.845623 master-0 kubenswrapper[31420]: I0220 12:05:08.844504 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 12:05:08.845623 master-0 kubenswrapper[31420]: E0220 12:05:08.844739 31420 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 12:05:08.845623 master-0 kubenswrapper[31420]: E0220 12:05:08.844758 31420 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 12:05:08.845623 master-0 kubenswrapper[31420]: E0220 12:05:08.844813 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access podName:97095f88-ee81-4a47-9bd7-1dbe71ec8d4d nodeName:}" failed. No retries permitted until 2026-02-20 12:05:16.844800173 +0000 UTC m=+21.564038414 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access") pod "installer-3-master-0" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 12:05:08.901144 master-0 kubenswrapper[31420]: I0220 12:05:08.901082 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 12:05:08.912378 master-0 kubenswrapper[31420]: I0220 12:05:08.912317 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-6qtzc"
Feb 20 12:05:09.113691 master-0 kubenswrapper[31420]: I0220 12:05:09.113484 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q287t"
Feb 20 12:05:09.160278 master-0 kubenswrapper[31420]: I0220 12:05:09.160209 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q287t"
Feb 20 12:05:09.638861 master-0 kubenswrapper[31420]: I0220 12:05:09.638789 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 12:05:09.645050 master-0 kubenswrapper[31420]: I0220 12:05:09.644986 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"
Feb 20 12:05:11.003637 master-0 kubenswrapper[31420]: I0220 12:05:11.003553 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89t2q"
Feb 20 12:05:11.076409 master-0 kubenswrapper[31420]: I0220 12:05:11.076329 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89t2q"
Feb 20 12:05:16.307960 master-0 kubenswrapper[31420]: I0220 12:05:16.307870 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:05:16.308879 master-0 kubenswrapper[31420]: I0220 12:05:16.308180 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 12:05:16.314201 master-0 kubenswrapper[31420]: I0220 12:05:16.314120 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 12:05:16.331416 master-0 kubenswrapper[31420]: I0220 12:05:16.331245 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l848"
Feb 20 12:05:16.873793 master-0 kubenswrapper[31420]: I0220 12:05:16.873694 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 12:05:16.874105 master-0 kubenswrapper[31420]: E0220 12:05:16.873954 31420 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 12:05:16.874105 master-0 kubenswrapper[31420]: E0220 12:05:16.873986 31420 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 12:05:16.874105 master-0 kubenswrapper[31420]: E0220 12:05:16.874070 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access podName:97095f88-ee81-4a47-9bd7-1dbe71ec8d4d nodeName:}" failed. No retries permitted until 2026-02-20 12:05:32.874045091 +0000 UTC m=+37.593283362 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access") pod "installer-3-master-0" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 12:05:17.622639 master-0 kubenswrapper[31420]: I0220 12:05:17.622562 31420 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 20 12:05:17.623638 master-0 kubenswrapper[31420]: I0220 12:05:17.622858 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor" containerID="cri-o://fb183355686e4afc132c4d4de7e53c26823b10e5e50f94804dcb7abd86778e66" gracePeriod=5
Feb 20 12:05:23.080331 master-0 kubenswrapper[31420]: I0220 12:05:23.080239 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log"
Feb 20 12:05:23.081196 master-0 kubenswrapper[31420]: I0220 12:05:23.080339 31420 generic.go:334] "Generic (PLEG): container finished" podID="5c4f5d60772fa42f26e9c219bffa62b9" containerID="fb183355686e4afc132c4d4de7e53c26823b10e5e50f94804dcb7abd86778e66" exitCode=137
Feb 20 12:05:23.223034 master-0 kubenswrapper[31420]: I0220 12:05:23.222941 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log"
Feb 20 12:05:23.223288 master-0 kubenswrapper[31420]: I0220 12:05:23.223076 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:05:23.271923 master-0 kubenswrapper[31420]: I0220 12:05:23.271837 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 12:05:23.271923 master-0 kubenswrapper[31420]: I0220 12:05:23.271922 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 12:05:23.272262 master-0 kubenswrapper[31420]: I0220 12:05:23.272016 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests" (OuterVolumeSpecName: "manifests") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:05:23.272262 master-0 kubenswrapper[31420]: I0220 12:05:23.272054 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 12:05:23.272262 master-0 kubenswrapper[31420]: I0220 12:05:23.272088 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 12:05:23.272262 master-0 kubenswrapper[31420]: I0220 12:05:23.272105 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:05:23.272262 master-0 kubenswrapper[31420]: I0220 12:05:23.272162 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 12:05:23.272262 master-0 kubenswrapper[31420]: I0220 12:05:23.272172 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:05:23.272674 master-0 kubenswrapper[31420]: I0220 12:05:23.272418 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log" (OuterVolumeSpecName: "var-log") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:05:23.272885 master-0 kubenswrapper[31420]: I0220 12:05:23.272834 31420 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:05:23.272885 master-0 kubenswrapper[31420]: I0220 12:05:23.272875 31420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 12:05:23.273028 master-0 kubenswrapper[31420]: I0220 12:05:23.272895 31420 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") on node \"master-0\" DevicePath \"\""
Feb 20 12:05:23.273028 master-0 kubenswrapper[31420]: I0220 12:05:23.272912 31420 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") on node \"master-0\" DevicePath \"\""
Feb 20 12:05:23.280625 master-0 kubenswrapper[31420]: I0220 12:05:23.280561 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:05:23.374869 master-0 kubenswrapper[31420]: I0220 12:05:23.374712 31420 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:05:23.531183 master-0 kubenswrapper[31420]: I0220 12:05:23.531079 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4f5d60772fa42f26e9c219bffa62b9" path="/var/lib/kubelet/pods/5c4f5d60772fa42f26e9c219bffa62b9/volumes"
Feb 20 12:05:23.531664 master-0 kubenswrapper[31420]: I0220 12:05:23.531610 31420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 20 12:05:23.551035 master-0 kubenswrapper[31420]: I0220 12:05:23.550422 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 20 12:05:23.551035 master-0 kubenswrapper[31420]: I0220 12:05:23.550583 31420 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="1712eba5-66d7-462b-baad-ef4fa19b45c5"
Feb 20 12:05:23.555410 master-0 kubenswrapper[31420]: I0220 12:05:23.555335 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 20 12:05:23.555410 master-0 kubenswrapper[31420]: I0220 12:05:23.555374 31420 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="1712eba5-66d7-462b-baad-ef4fa19b45c5"
Feb 20 12:05:24.091456 master-0 kubenswrapper[31420]: I0220 12:05:24.091380 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log"
Feb 20 12:05:24.092250 master-0 kubenswrapper[31420]: I0220 12:05:24.091500 31420 scope.go:117] "RemoveContainer" containerID="fb183355686e4afc132c4d4de7e53c26823b10e5e50f94804dcb7abd86778e66"
Feb 20 12:05:24.092250 master-0 kubenswrapper[31420]: I0220 12:05:24.091620 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:05:25.180684 master-0 kubenswrapper[31420]: I0220 12:05:25.180560 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 12:05:25.188063 master-0 kubenswrapper[31420]: I0220 12:05:25.187986 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 12:05:32.933462 master-0 kubenswrapper[31420]: I0220 12:05:32.933369 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 12:05:32.935019 master-0 kubenswrapper[31420]: E0220 12:05:32.933769 31420 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 12:05:32.935019 master-0 kubenswrapper[31420]: E0220 12:05:32.933839 31420 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 12:05:32.935019 master-0 kubenswrapper[31420]: E0220 12:05:32.933939 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access podName:97095f88-ee81-4a47-9bd7-1dbe71ec8d4d nodeName:}" failed. No retries permitted until 2026-02-20 12:06:04.933902903 +0000 UTC m=+69.653141184 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access") pod "installer-3-master-0" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 12:05:43.079969 master-0 kubenswrapper[31420]: I0220 12:05:43.079830 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: E0220 12:05:43.080630 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb93420d-7c5a-4492-bd16-0104104406b4" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: I0220 12:05:43.080692 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb93420d-7c5a-4492-bd16-0104104406b4" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: E0220 12:05:43.080777 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e9ba02-39d0-41fb-aed1-39923698bc0b" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: I0220 12:05:43.080795 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e9ba02-39d0-41fb-aed1-39923698bc0b" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: E0220 12:05:43.080839 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd4430b-8dbc-46df-9efe-49d520a7c75a" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: I0220 12:05:43.080854 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd4430b-8dbc-46df-9efe-49d520a7c75a" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: E0220 12:05:43.080882 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305f625e-16b0-4840-a9e2-25571b49ad2a" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: I0220 12:05:43.080896 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="305f625e-16b0-4840-a9e2-25571b49ad2a" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: E0220 12:05:43.080924 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerName="assisted-installer-controller"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: I0220 12:05:43.080937 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerName="assisted-installer-controller"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: E0220 12:05:43.080960 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41b23ca-9eed-4eb9-95dc-92418a6f4e86" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: I0220 12:05:43.080976 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41b23ca-9eed-4eb9-95dc-92418a6f4e86" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: E0220 12:05:43.080998 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5710eb66-9717-4beb-a8b2-19f6886376b3" containerName="installer"
Feb 20 12:05:43.080997 master-0 kubenswrapper[31420]: I0220 12:05:43.081013 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="5710eb66-9717-4beb-a8b2-19f6886376b3" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: E0220 12:05:43.081052 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081068 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: E0220 12:05:43.081115 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081130 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: E0220 12:05:43.081168 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de8fb9d-34f7-49bc-867d-827a0f9a11e7" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081186 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de8fb9d-34f7-49bc-867d-827a0f9a11e7" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: E0220 12:05:43.081219 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148cc321-3a17-4852-a75a-e8ac95139eb8" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081232 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="148cc321-3a17-4852-a75a-e8ac95139eb8" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: E0220 12:05:43.081264 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5827049e-6178-46cf-83c5-cff55daac768" containerName="collect-profiles"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081278 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="5827049e-6178-46cf-83c5-cff55daac768" containerName="collect-profiles"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: E0220 12:05:43.081305 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35310285-fff9-43d6-ad9a-5d959ef116ec" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081318 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="35310285-fff9-43d6-ad9a-5d959ef116ec" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081674 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="5827049e-6178-46cf-83c5-cff55daac768" containerName="collect-profiles"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081756 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e9ba02-39d0-41fb-aed1-39923698bc0b" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081780 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41b23ca-9eed-4eb9-95dc-92418a6f4e86" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081818 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="305f625e-16b0-4840-a9e2-25571b49ad2a" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081861 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="148cc321-3a17-4852-a75a-e8ac95139eb8" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081885 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="5710eb66-9717-4beb-a8b2-19f6886376b3" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081926 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de8fb9d-34f7-49bc-867d-827a0f9a11e7" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081951 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd4430b-8dbc-46df-9efe-49d520a7c75a" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.081988 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="35310285-fff9-43d6-ad9a-5d959ef116ec" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.082010 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb93420d-7c5a-4492-bd16-0104104406b4" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.082041 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" containerName="installer"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.082066 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd" containerName="assisted-installer-controller"
Feb 20 12:05:43.082311 master-0 kubenswrapper[31420]: I0220 12:05:43.082093 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 20 12:05:43.083779 master-0 kubenswrapper[31420]: I0220 12:05:43.083684 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:05:43.087050 master-0 kubenswrapper[31420]: I0220 12:05:43.086980 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 20 12:05:43.092104 master-0 kubenswrapper[31420]: I0220 12:05:43.092042 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9fz4f"
Feb 20 12:05:43.096215 master-0 kubenswrapper[31420]: I0220 12:05:43.096154 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 20 12:05:43.105812 master-0 kubenswrapper[31420]: I0220 12:05:43.105751 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kube-api-access\") pod \"installer-4-master-0\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:05:43.105936 master-0 kubenswrapper[31420]: I0220 12:05:43.105858 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-var-lock\") pod \"installer-4-master-0\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:05:43.106007 master-0 kubenswrapper[31420]: I0220 12:05:43.105956 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:05:43.207250 master-0 kubenswrapper[31420]: I0220 12:05:43.207162 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-var-lock\") pod \"installer-4-master-0\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:05:43.207614 master-0 kubenswrapper[31420]: I0220 12:05:43.207307 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:05:43.207614 master-0 kubenswrapper[31420]: I0220 12:05:43.207345 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-var-lock\") pod \"installer-4-master-0\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:05:43.207770 master-0 kubenswrapper[31420]: I0220 12:05:43.207645 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kube-api-access\") pod \"installer-4-master-0\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:05:43.207846 master-0 kubenswrapper[31420]: I0220 12:05:43.207653 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:05:43.232523 master-0 kubenswrapper[31420]: I0220 12:05:43.232391 31420 swap_util.go:74] "error creating dir to
test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 20 12:05:43.237506 master-0 kubenswrapper[31420]: I0220 12:05:43.237425 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kube-api-access\") pod \"installer-4-master-0\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 12:05:43.436028 master-0 kubenswrapper[31420]: I0220 12:05:43.435847 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 12:05:43.969612 master-0 kubenswrapper[31420]: I0220 12:05:43.969502 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 20 12:05:43.976738 master-0 kubenswrapper[31420]: W0220 12:05:43.976652 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode54c2fd5_aea1_4dc7_ba21_43b2b0901cbf.slice/crio-03cbee9db9320a7db5be817245d061f0a2b3cfef190a9b306e992ee85a0c319d WatchSource:0}: Error finding container 03cbee9db9320a7db5be817245d061f0a2b3cfef190a9b306e992ee85a0c319d: Status 404 returned error can't find the container with id 03cbee9db9320a7db5be817245d061f0a2b3cfef190a9b306e992ee85a0c319d Feb 20 12:05:44.277496 master-0 kubenswrapper[31420]: I0220 12:05:44.277412 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf","Type":"ContainerStarted","Data":"03cbee9db9320a7db5be817245d061f0a2b3cfef190a9b306e992ee85a0c319d"} Feb 20 12:05:44.862360 master-0 kubenswrapper[31420]: I0220 12:05:44.862256 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-599c7886f5-zltnd"] Feb 20 12:05:44.862810 master-0 kubenswrapper[31420]: I0220 12:05:44.862750 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" podUID="98226a59-5234-48f3-a9cd-21de305810dc" containerName="controller-manager" containerID="cri-o://b979af759c905f991e94f3acb27f25df10266c61c084ab82e4e30ab77b2ee843" gracePeriod=30 Feb 20 12:05:44.877827 master-0 kubenswrapper[31420]: I0220 12:05:44.877771 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"] Feb 20 12:05:44.878039 master-0 kubenswrapper[31420]: I0220 12:05:44.878013 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager" containerID="cri-o://c19eec66d34e5d17a3e186a00fcaa04150b49fcc6bd52c6714edcc3b79452483" gracePeriod=30 Feb 20 12:05:45.312211 master-0 kubenswrapper[31420]: I0220 12:05:45.311998 31420 generic.go:334] "Generic (PLEG): container finished" podID="98226a59-5234-48f3-a9cd-21de305810dc" containerID="b979af759c905f991e94f3acb27f25df10266c61c084ab82e4e30ab77b2ee843" exitCode=0 Feb 20 12:05:45.312211 master-0 kubenswrapper[31420]: I0220 12:05:45.312070 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" event={"ID":"98226a59-5234-48f3-a9cd-21de305810dc","Type":"ContainerDied","Data":"b979af759c905f991e94f3acb27f25df10266c61c084ab82e4e30ab77b2ee843"} Feb 20 12:05:45.312899 master-0 kubenswrapper[31420]: I0220 12:05:45.312225 31420 scope.go:117] "RemoveContainer" containerID="1b7b0cda43f9601273f5b828026cbdd290a92a99bdd94b1cd74e1268067e317e" Feb 20 12:05:45.320892 master-0 kubenswrapper[31420]: I0220 
12:05:45.320863 31420 generic.go:334] "Generic (PLEG): container finished" podID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerID="c19eec66d34e5d17a3e186a00fcaa04150b49fcc6bd52c6714edcc3b79452483" exitCode=0 Feb 20 12:05:45.320984 master-0 kubenswrapper[31420]: I0220 12:05:45.320915 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" event={"ID":"c29fd426-7c89-434e-8332-1ca31075d4bf","Type":"ContainerDied","Data":"c19eec66d34e5d17a3e186a00fcaa04150b49fcc6bd52c6714edcc3b79452483"} Feb 20 12:05:45.324763 master-0 kubenswrapper[31420]: I0220 12:05:45.324723 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf","Type":"ContainerStarted","Data":"987d146e296a86cf49d3637e41d250b91b8351b024e4ca2354fe527e28079306"} Feb 20 12:05:45.356716 master-0 kubenswrapper[31420]: I0220 12:05:45.356582 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.356547846 podStartE2EDuration="2.356547846s" podCreationTimestamp="2026-02-20 12:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:05:45.343457534 +0000 UTC m=+50.062695775" watchObservedRunningTime="2026-02-20 12:05:45.356547846 +0000 UTC m=+50.075786087" Feb 20 12:05:45.383519 master-0 kubenswrapper[31420]: I0220 12:05:45.383210 31420 scope.go:117] "RemoveContainer" containerID="b4292dccd690e9143e933dee29f59d01786a2f035fd7b57469d300f2f8a55365" Feb 20 12:05:45.592600 master-0 kubenswrapper[31420]: I0220 12:05:45.592504 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:05:45.759561 master-0 kubenswrapper[31420]: I0220 12:05:45.758962 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca\") pod \"c29fd426-7c89-434e-8332-1ca31075d4bf\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " Feb 20 12:05:45.759561 master-0 kubenswrapper[31420]: I0220 12:05:45.759052 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert\") pod \"c29fd426-7c89-434e-8332-1ca31075d4bf\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " Feb 20 12:05:45.759561 master-0 kubenswrapper[31420]: I0220 12:05:45.759084 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7k2n\" (UniqueName: \"kubernetes.io/projected/c29fd426-7c89-434e-8332-1ca31075d4bf-kube-api-access-z7k2n\") pod \"c29fd426-7c89-434e-8332-1ca31075d4bf\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " Feb 20 12:05:45.759561 master-0 kubenswrapper[31420]: I0220 12:05:45.759111 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config\") pod \"c29fd426-7c89-434e-8332-1ca31075d4bf\" (UID: \"c29fd426-7c89-434e-8332-1ca31075d4bf\") " Feb 20 12:05:45.760265 master-0 kubenswrapper[31420]: I0220 12:05:45.760058 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config" (OuterVolumeSpecName: "config") pod "c29fd426-7c89-434e-8332-1ca31075d4bf" (UID: "c29fd426-7c89-434e-8332-1ca31075d4bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:05:45.760598 master-0 kubenswrapper[31420]: I0220 12:05:45.760566 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "c29fd426-7c89-434e-8332-1ca31075d4bf" (UID: "c29fd426-7c89-434e-8332-1ca31075d4bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:05:45.763511 master-0 kubenswrapper[31420]: I0220 12:05:45.763429 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c29fd426-7c89-434e-8332-1ca31075d4bf" (UID: "c29fd426-7c89-434e-8332-1ca31075d4bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:05:45.766922 master-0 kubenswrapper[31420]: I0220 12:05:45.766833 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29fd426-7c89-434e-8332-1ca31075d4bf-kube-api-access-z7k2n" (OuterVolumeSpecName: "kube-api-access-z7k2n") pod "c29fd426-7c89-434e-8332-1ca31075d4bf" (UID: "c29fd426-7c89-434e-8332-1ca31075d4bf"). InnerVolumeSpecName "kube-api-access-z7k2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:05:45.814179 master-0 kubenswrapper[31420]: I0220 12:05:45.814074 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:05:45.860746 master-0 kubenswrapper[31420]: I0220 12:05:45.860687 31420 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 12:05:45.860964 master-0 kubenswrapper[31420]: I0220 12:05:45.860755 31420 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c29fd426-7c89-434e-8332-1ca31075d4bf-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 12:05:45.860964 master-0 kubenswrapper[31420]: I0220 12:05:45.860774 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7k2n\" (UniqueName: \"kubernetes.io/projected/c29fd426-7c89-434e-8332-1ca31075d4bf-kube-api-access-z7k2n\") on node \"master-0\" DevicePath \"\"" Feb 20 12:05:45.860964 master-0 kubenswrapper[31420]: I0220 12:05:45.860787 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29fd426-7c89-434e-8332-1ca31075d4bf-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:05:45.961768 master-0 kubenswrapper[31420]: I0220 12:05:45.961689 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert\") pod \"98226a59-5234-48f3-a9cd-21de305810dc\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " Feb 20 12:05:45.962056 master-0 kubenswrapper[31420]: I0220 12:05:45.961871 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles\") pod \"98226a59-5234-48f3-a9cd-21de305810dc\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " Feb 20 12:05:45.962913 master-0 
kubenswrapper[31420]: I0220 12:05:45.962853 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "98226a59-5234-48f3-a9cd-21de305810dc" (UID: "98226a59-5234-48f3-a9cd-21de305810dc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:05:45.963092 master-0 kubenswrapper[31420]: I0220 12:05:45.963057 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca\") pod \"98226a59-5234-48f3-a9cd-21de305810dc\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " Feb 20 12:05:45.963931 master-0 kubenswrapper[31420]: I0220 12:05:45.963882 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca" (OuterVolumeSpecName: "client-ca") pod "98226a59-5234-48f3-a9cd-21de305810dc" (UID: "98226a59-5234-48f3-a9cd-21de305810dc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:05:45.964079 master-0 kubenswrapper[31420]: I0220 12:05:45.964029 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config\") pod \"98226a59-5234-48f3-a9cd-21de305810dc\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " Feb 20 12:05:45.965178 master-0 kubenswrapper[31420]: I0220 12:05:45.965116 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config" (OuterVolumeSpecName: "config") pod "98226a59-5234-48f3-a9cd-21de305810dc" (UID: "98226a59-5234-48f3-a9cd-21de305810dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:05:45.965389 master-0 kubenswrapper[31420]: I0220 12:05:45.965317 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2hwr\" (UniqueName: \"kubernetes.io/projected/98226a59-5234-48f3-a9cd-21de305810dc-kube-api-access-j2hwr\") pod \"98226a59-5234-48f3-a9cd-21de305810dc\" (UID: \"98226a59-5234-48f3-a9cd-21de305810dc\") " Feb 20 12:05:45.966625 master-0 kubenswrapper[31420]: I0220 12:05:45.966574 31420 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 20 12:05:45.966678 master-0 kubenswrapper[31420]: I0220 12:05:45.966627 31420 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 12:05:45.966678 master-0 kubenswrapper[31420]: I0220 12:05:45.966654 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98226a59-5234-48f3-a9cd-21de305810dc-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:05:45.969399 master-0 kubenswrapper[31420]: I0220 12:05:45.969335 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98226a59-5234-48f3-a9cd-21de305810dc-kube-api-access-j2hwr" (OuterVolumeSpecName: "kube-api-access-j2hwr") pod "98226a59-5234-48f3-a9cd-21de305810dc" (UID: "98226a59-5234-48f3-a9cd-21de305810dc"). InnerVolumeSpecName "kube-api-access-j2hwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:05:45.973058 master-0 kubenswrapper[31420]: I0220 12:05:45.972959 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "98226a59-5234-48f3-a9cd-21de305810dc" (UID: "98226a59-5234-48f3-a9cd-21de305810dc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:05:46.070083 master-0 kubenswrapper[31420]: I0220 12:05:46.069912 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2hwr\" (UniqueName: \"kubernetes.io/projected/98226a59-5234-48f3-a9cd-21de305810dc-kube-api-access-j2hwr\") on node \"master-0\" DevicePath \"\"" Feb 20 12:05:46.070083 master-0 kubenswrapper[31420]: I0220 12:05:46.069992 31420 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98226a59-5234-48f3-a9cd-21de305810dc-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 12:05:46.341156 master-0 kubenswrapper[31420]: I0220 12:05:46.340931 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" event={"ID":"c29fd426-7c89-434e-8332-1ca31075d4bf","Type":"ContainerDied","Data":"8341254b8ef7faec187b8fe415e34b54bbc9e2b3da20b0d37f8005ee126bc089"} Feb 20 12:05:46.341156 master-0 kubenswrapper[31420]: I0220 12:05:46.341071 31420 scope.go:117] "RemoveContainer" containerID="c19eec66d34e5d17a3e186a00fcaa04150b49fcc6bd52c6714edcc3b79452483" Feb 20 12:05:46.342142 master-0 kubenswrapper[31420]: I0220 12:05:46.341294 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6" Feb 20 12:05:46.348985 master-0 kubenswrapper[31420]: I0220 12:05:46.348909 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" event={"ID":"98226a59-5234-48f3-a9cd-21de305810dc","Type":"ContainerDied","Data":"e668bf18622f735aba88fe56630f792fd4bf653bbe4e51d87240b3f22f8d64bd"} Feb 20 12:05:46.348985 master-0 kubenswrapper[31420]: I0220 12:05:46.348930 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-599c7886f5-zltnd" Feb 20 12:05:46.372250 master-0 kubenswrapper[31420]: I0220 12:05:46.370852 31420 scope.go:117] "RemoveContainer" containerID="b979af759c905f991e94f3acb27f25df10266c61c084ab82e4e30ab77b2ee843" Feb 20 12:05:46.422811 master-0 kubenswrapper[31420]: I0220 12:05:46.422714 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"] Feb 20 12:05:46.431322 master-0 kubenswrapper[31420]: I0220 12:05:46.428450 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689d967cd5-ptpq6"] Feb 20 12:05:46.441680 master-0 kubenswrapper[31420]: I0220 12:05:46.441602 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-599c7886f5-zltnd"] Feb 20 12:05:46.445858 master-0 kubenswrapper[31420]: I0220 12:05:46.445705 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-599c7886f5-zltnd"] Feb 20 12:05:47.510113 master-0 kubenswrapper[31420]: I0220 12:05:47.510009 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98226a59-5234-48f3-a9cd-21de305810dc" path="/var/lib/kubelet/pods/98226a59-5234-48f3-a9cd-21de305810dc/volumes" Feb 20 
12:05:47.511302 master-0 kubenswrapper[31420]: I0220 12:05:47.511245 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" path="/var/lib/kubelet/pods/c29fd426-7c89-434e-8332-1ca31075d4bf/volumes" Feb 20 12:06:05.005067 master-0 kubenswrapper[31420]: I0220 12:06:05.004947 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:06:05.010098 master-0 kubenswrapper[31420]: I0220 12:06:05.010031 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 12:06:05.106002 master-0 kubenswrapper[31420]: I0220 12:06:05.105919 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") pod \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\" (UID: \"97095f88-ee81-4a47-9bd7-1dbe71ec8d4d\") " Feb 20 12:06:05.109331 master-0 kubenswrapper[31420]: I0220 12:06:05.109241 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d" (UID: "97095f88-ee81-4a47-9bd7-1dbe71ec8d4d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:06:05.208892 master-0 kubenswrapper[31420]: I0220 12:06:05.208731 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97095f88-ee81-4a47-9bd7-1dbe71ec8d4d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:12.677147 master-0 kubenswrapper[31420]: I0220 12:06:12.677012 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 20 12:06:12.678237 master-0 kubenswrapper[31420]: I0220 12:06:12.677402 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf" containerName="installer" containerID="cri-o://987d146e296a86cf49d3637e41d250b91b8351b024e4ca2354fe527e28079306" gracePeriod=30 Feb 20 12:06:15.649490 master-0 kubenswrapper[31420]: I0220 12:06:15.649314 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf/installer/0.log" Feb 20 12:06:15.649490 master-0 kubenswrapper[31420]: I0220 12:06:15.649415 31420 generic.go:334] "Generic (PLEG): container finished" podID="e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf" containerID="987d146e296a86cf49d3637e41d250b91b8351b024e4ca2354fe527e28079306" exitCode=1 Feb 20 12:06:15.649490 master-0 kubenswrapper[31420]: I0220 12:06:15.649461 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf","Type":"ContainerDied","Data":"987d146e296a86cf49d3637e41d250b91b8351b024e4ca2354fe527e28079306"} Feb 20 12:06:15.828514 master-0 kubenswrapper[31420]: I0220 12:06:15.828451 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf/installer/0.log" Feb 20 
12:06:15.828742 master-0 kubenswrapper[31420]: I0220 12:06:15.828555 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 12:06:15.888198 master-0 kubenswrapper[31420]: I0220 12:06:15.888126 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kube-api-access\") pod \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " Feb 20 12:06:15.888198 master-0 kubenswrapper[31420]: I0220 12:06:15.888189 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-var-lock\") pod \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " Feb 20 12:06:15.888740 master-0 kubenswrapper[31420]: I0220 12:06:15.888369 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-var-lock" (OuterVolumeSpecName: "var-lock") pod "e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf" (UID: "e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:06:15.888740 master-0 kubenswrapper[31420]: I0220 12:06:15.888402 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kubelet-dir\") pod \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\" (UID: \"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf\") " Feb 20 12:06:15.888740 master-0 kubenswrapper[31420]: I0220 12:06:15.888557 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf" (UID: "e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:06:15.888740 master-0 kubenswrapper[31420]: I0220 12:06:15.888742 31420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:15.889030 master-0 kubenswrapper[31420]: I0220 12:06:15.888758 31420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:15.892766 master-0 kubenswrapper[31420]: I0220 12:06:15.892683 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf" (UID: "e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:06:15.991065 master-0 kubenswrapper[31420]: I0220 12:06:15.990980 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:16.661865 master-0 kubenswrapper[31420]: I0220 12:06:16.661770 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf/installer/0.log" Feb 20 12:06:16.661865 master-0 kubenswrapper[31420]: I0220 12:06:16.661858 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf","Type":"ContainerDied","Data":"03cbee9db9320a7db5be817245d061f0a2b3cfef190a9b306e992ee85a0c319d"} Feb 20 12:06:16.662955 master-0 kubenswrapper[31420]: I0220 12:06:16.661915 31420 scope.go:117] "RemoveContainer" containerID="987d146e296a86cf49d3637e41d250b91b8351b024e4ca2354fe527e28079306" Feb 20 12:06:16.662955 master-0 kubenswrapper[31420]: I0220 12:06:16.662001 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 20 12:06:16.719501 master-0 kubenswrapper[31420]: I0220 12:06:16.719359 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 20 12:06:16.725340 master-0 kubenswrapper[31420]: I0220 12:06:16.725274 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 20 12:06:17.513779 master-0 kubenswrapper[31420]: I0220 12:06:17.513704 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf" path="/var/lib/kubelet/pods/e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf/volumes"
Feb 20 12:06:17.882836 master-0 kubenswrapper[31420]: I0220 12:06:17.882603 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: E0220 12:06:17.884010 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: I0220 12:06:17.884081 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: E0220 12:06:17.884116 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf" containerName="installer"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: I0220 12:06:17.884123 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf" containerName="installer"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: E0220 12:06:17.884165 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: I0220 12:06:17.884173 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: E0220 12:06:17.884198 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98226a59-5234-48f3-a9cd-21de305810dc" containerName="controller-manager"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: I0220 12:06:17.884206 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="98226a59-5234-48f3-a9cd-21de305810dc" containerName="controller-manager"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: I0220 12:06:17.884441 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="98226a59-5234-48f3-a9cd-21de305810dc" containerName="controller-manager"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: I0220 12:06:17.884464 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="98226a59-5234-48f3-a9cd-21de305810dc" containerName="controller-manager"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: I0220 12:06:17.884507 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: I0220 12:06:17.884581 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54c2fd5-aea1-4dc7-ba21-43b2b0901cbf" containerName="installer"
Feb 20 12:06:17.886325 master-0 kubenswrapper[31420]: I0220 12:06:17.885600 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:17.888641 master-0 kubenswrapper[31420]: I0220 12:06:17.888574 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9fz4f"
Feb 20 12:06:17.888902 master-0 kubenswrapper[31420]: I0220 12:06:17.888828 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 20 12:06:17.906765 master-0 kubenswrapper[31420]: I0220 12:06:17.905509 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 20 12:06:17.920673 master-0 kubenswrapper[31420]: I0220 12:06:17.920588 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:17.920927 master-0 kubenswrapper[31420]: I0220 12:06:17.920780 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:17.920927 master-0 kubenswrapper[31420]: I0220 12:06:17.920867 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-var-lock\") pod \"installer-5-master-0\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:18.022451 master-0 kubenswrapper[31420]: I0220 12:06:18.022347 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-var-lock\") pod \"installer-5-master-0\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:18.022761 master-0 kubenswrapper[31420]: I0220 12:06:18.022575 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-var-lock\") pod \"installer-5-master-0\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:18.022761 master-0 kubenswrapper[31420]: I0220 12:06:18.022678 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:18.022921 master-0 kubenswrapper[31420]: I0220 12:06:18.022893 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:18.023077 master-0 kubenswrapper[31420]: I0220 12:06:18.023035 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:18.054113 master-0 kubenswrapper[31420]: I0220 12:06:18.054029 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:18.281084 master-0 kubenswrapper[31420]: I0220 12:06:18.280993 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:18.791587 master-0 kubenswrapper[31420]: I0220 12:06:18.791490 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 20 12:06:19.696765 master-0 kubenswrapper[31420]: I0220 12:06:19.696686 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"e9cd2982-6b46-4ced-9e1d-78b60cbd6391","Type":"ContainerStarted","Data":"43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45"}
Feb 20 12:06:19.696765 master-0 kubenswrapper[31420]: I0220 12:06:19.696762 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"e9cd2982-6b46-4ced-9e1d-78b60cbd6391","Type":"ContainerStarted","Data":"f8a215a4ba0b20f3e959b0b731e5793137ea4c5abc28f3f055038954db2265e4"}
Feb 20 12:06:19.727616 master-0 kubenswrapper[31420]: I0220 12:06:19.727447 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.727412794 podStartE2EDuration="2.727412794s" podCreationTimestamp="2026-02-20 12:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:06:19.716706349 +0000 UTC m=+84.435944670" watchObservedRunningTime="2026-02-20 12:06:19.727412794 +0000 UTC m=+84.446651075"
Feb 20 12:06:35.763069 master-0 kubenswrapper[31420]: I0220 12:06:35.762434 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 20 12:06:35.763840 master-0 kubenswrapper[31420]: I0220 12:06:35.763694 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="e9cd2982-6b46-4ced-9e1d-78b60cbd6391" containerName="installer" containerID="cri-o://43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45" gracePeriod=30
Feb 20 12:06:36.663370 master-0 kubenswrapper[31420]: I0220 12:06:36.661717 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cdbd68d56-hw588"]
Feb 20 12:06:36.663370 master-0 kubenswrapper[31420]: E0220 12:06:36.662103 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98226a59-5234-48f3-a9cd-21de305810dc" containerName="controller-manager"
Feb 20 12:06:36.663370 master-0 kubenswrapper[31420]: I0220 12:06:36.662121 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="98226a59-5234-48f3-a9cd-21de305810dc" containerName="controller-manager"
Feb 20 12:06:36.663370 master-0 kubenswrapper[31420]: I0220 12:06:36.662271 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29fd426-7c89-434e-8332-1ca31075d4bf" containerName="route-controller-manager"
Feb 20 12:06:36.663370 master-0 kubenswrapper[31420]: I0220 12:06:36.662762 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588"
Feb 20 12:06:36.667731 master-0 kubenswrapper[31420]: I0220 12:06:36.666379 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 12:06:36.667731 master-0 kubenswrapper[31420]: I0220 12:06:36.666789 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-69d2b"
Feb 20 12:06:36.667731 master-0 kubenswrapper[31420]: I0220 12:06:36.667008 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 12:06:36.667731 master-0 kubenswrapper[31420]: I0220 12:06:36.667706 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl"]
Feb 20 12:06:36.668642 master-0 kubenswrapper[31420]: I0220 12:06:36.668002 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 12:06:36.668869 master-0 kubenswrapper[31420]: I0220 12:06:36.668821 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl"
Feb 20 12:06:36.671170 master-0 kubenswrapper[31420]: I0220 12:06:36.671107 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 12:06:36.671170 master-0 kubenswrapper[31420]: I0220 12:06:36.671135 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-nd6lj"
Feb 20 12:06:36.671501 master-0 kubenswrapper[31420]: I0220 12:06:36.671284 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 20 12:06:36.671501 master-0 kubenswrapper[31420]: I0220 12:06:36.671337 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 12:06:36.671501 master-0 kubenswrapper[31420]: I0220 12:06:36.671438 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 20 12:06:36.671943 master-0 kubenswrapper[31420]: I0220 12:06:36.671772 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 12:06:36.672273 master-0 kubenswrapper[31420]: I0220 12:06:36.672197 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 20 12:06:36.675903 master-0 kubenswrapper[31420]: I0220 12:06:36.673334 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 12:06:36.675903 master-0 kubenswrapper[31420]: I0220 12:06:36.673795 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8p77l"]
Feb 20 12:06:36.677469 master-0 kubenswrapper[31420]: I0220 12:06:36.676769 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8p77l"
Feb 20 12:06:36.677469 master-0 kubenswrapper[31420]: I0220 12:06:36.677227 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"]
Feb 20 12:06:36.677469 master-0 kubenswrapper[31420]: I0220 12:06:36.677401 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 12:06:36.678568 master-0 kubenswrapper[31420]: I0220 12:06:36.677987 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.678800 master-0 kubenswrapper[31420]: I0220 12:06:36.678758 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk"]
Feb 20 12:06:36.679562 master-0 kubenswrapper[31420]: I0220 12:06:36.679518 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.681006 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.681073 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-27tf7"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.681764 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.681995 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.682017 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.682064 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.682694 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kmfb6"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.682888 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.683001 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.683101 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 20 12:06:36.685663 master-0 kubenswrapper[31420]: I0220 12:06:36.683246 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 20 12:06:36.686508 master-0 kubenswrapper[31420]: I0220 12:06:36.686435 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-q8nx7"
Feb 20 12:06:36.686608 master-0 kubenswrapper[31420]: I0220 12:06:36.686547 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 20 12:06:36.686608 master-0 kubenswrapper[31420]: I0220 12:06:36.686568 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 20 12:06:36.686717 master-0 kubenswrapper[31420]: I0220 12:06:36.686682 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 20 12:06:36.686761 master-0 kubenswrapper[31420]: I0220 12:06:36.686748 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 20 12:06:36.687559 master-0 kubenswrapper[31420]: I0220 12:06:36.687513 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9"]
Feb 20 12:06:36.688616 master-0 kubenswrapper[31420]: I0220 12:06:36.688594 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9"
Feb 20 12:06:36.701495 master-0 kubenswrapper[31420]: I0220 12:06:36.695885 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-lc26c"
Feb 20 12:06:36.701495 master-0 kubenswrapper[31420]: I0220 12:06:36.696833 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 20 12:06:36.701495 master-0 kubenswrapper[31420]: I0220 12:06:36.699249 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hpqwd"]
Feb 20 12:06:36.701495 master-0 kubenswrapper[31420]: I0220 12:06:36.700163 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd"
Feb 20 12:06:36.704913 master-0 kubenswrapper[31420]: I0220 12:06:36.704274 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Feb 20 12:06:36.704913 master-0 kubenswrapper[31420]: I0220 12:06:36.704403 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cdbd68d56-hw588"]
Feb 20 12:06:36.704913 master-0 kubenswrapper[31420]: I0220 12:06:36.704481 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-fdbqz"
Feb 20 12:06:36.706470 master-0 kubenswrapper[31420]: I0220 12:06:36.706441 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 20 12:06:36.709741 master-0 kubenswrapper[31420]: I0220 12:06:36.709704 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl"]
Feb 20 12:06:36.710477 master-0 kubenswrapper[31420]: I0220 12:06:36.710220 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 20 12:06:36.713187 master-0 kubenswrapper[31420]: I0220 12:06:36.712858 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"]
Feb 20 12:06:36.718139 master-0 kubenswrapper[31420]: I0220 12:06:36.718092 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9"]
Feb 20 12:06:36.721404 master-0 kubenswrapper[31420]: I0220 12:06:36.721376 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk"]
Feb 20 12:06:36.768179 master-0 kubenswrapper[31420]: I0220 12:06:36.768135 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-config\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768225 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vn6r\" (UniqueName: \"kubernetes.io/projected/f4156c45-4529-4c5a-b691-33bc78ea64ae-kube-api-access-6vn6r\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768258 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-client-ca\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768275 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-policies\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768290 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/034ed75f-05ba-4a92-8fba-40b9ee2155bf-host\") pod \"node-ca-8p77l\" (UID: \"034ed75f-05ba-4a92-8fba-40b9ee2155bf\") " pod="openshift-image-registry/node-ca-8p77l"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768312 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-dir\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768325 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srv5n\" (UniqueName: \"kubernetes.io/projected/034ed75f-05ba-4a92-8fba-40b9ee2155bf-kube-api-access-srv5n\") pod \"node-ca-8p77l\" (UID: \"034ed75f-05ba-4a92-8fba-40b9ee2155bf\") " pod="openshift-image-registry/node-ca-8p77l"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768343 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-router-certs\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768367 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6e272535-93c8-4259-8775-f61f62b07be7-monitoring-plugin-cert\") pod \"monitoring-plugin-6cf879bbbd-4fqq9\" (UID: \"6e272535-93c8-4259-8775-f61f62b07be7\") " pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768386 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-error\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768412 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81492110-a8ec-4393-9df3-5fcb8c6c092e-config\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl"
Feb 20 12:06:36.768588 master-0 kubenswrapper[31420]: I0220 12:06:36.768539 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn4p8\" (UniqueName: \"kubernetes.io/projected/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-kube-api-access-hn4p8\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588"
Feb 20 12:06:36.769172 master-0 kubenswrapper[31420]: I0220 12:06:36.768610 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-session\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.769172 master-0 kubenswrapper[31420]: I0220 12:06:36.768682 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.769172 master-0 kubenswrapper[31420]: I0220 12:06:36.768776 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f4156c45-4529-4c5a-b691-33bc78ea64ae-ready\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd"
Feb 20 12:06:36.769172 master-0 kubenswrapper[31420]: I0220 12:06:36.768871 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675vc\" (UniqueName: \"kubernetes.io/projected/cec8d018-dd5b-4607-b3d7-1c824aa9a193-kube-api-access-675vc\") pod \"collect-profiles-29526480-9s2sk\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk"
Feb 20 12:06:36.769172 master-0 kubenswrapper[31420]: I0220 12:06:36.768917 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.769172 master-0 kubenswrapper[31420]: I0220 12:06:36.769051 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-login\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.769172 master-0 kubenswrapper[31420]: I0220 12:06:36.769172 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.769464 master-0 kubenswrapper[31420]: I0220 12:06:36.769196 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81492110-a8ec-4393-9df3-5fcb8c6c092e-client-ca\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl"
Feb 20 12:06:36.769712 master-0 kubenswrapper[31420]: I0220 12:06:36.769674 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-serving-cert\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588"
Feb 20 12:06:36.769770 master-0 kubenswrapper[31420]: I0220 12:06:36.769723 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec8d018-dd5b-4607-b3d7-1c824aa9a193-secret-volume\") pod \"collect-profiles-29526480-9s2sk\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk"
Feb 20 12:06:36.769838 master-0 kubenswrapper[31420]: I0220 12:06:36.769767 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pg2j\" (UniqueName: \"kubernetes.io/projected/81492110-a8ec-4393-9df3-5fcb8c6c092e-kube-api-access-8pg2j\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl"
Feb 20 12:06:36.769838 master-0 kubenswrapper[31420]: I0220 12:06:36.769788 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec8d018-dd5b-4607-b3d7-1c824aa9a193-config-volume\") pod \"collect-profiles-29526480-9s2sk\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk"
Feb 20 12:06:36.769838 master-0 kubenswrapper[31420]: I0220 12:06:36.769805 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.769838 master-0 kubenswrapper[31420]: I0220 12:06:36.769827 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.769999 master-0 kubenswrapper[31420]: I0220 12:06:36.769859 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-proxy-ca-bundles\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588"
Feb 20 12:06:36.769999 master-0 kubenswrapper[31420]: I0220 12:06:36.769895 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f4156c45-4529-4c5a-b691-33bc78ea64ae-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd"
Feb 20 12:06:36.769999 master-0 kubenswrapper[31420]: I0220 12:06:36.769915 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4dd\" (UniqueName: \"kubernetes.io/projected/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-kube-api-access-gl4dd\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.769999 master-0 kubenswrapper[31420]: I0220 12:06:36.769984 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-service-ca\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.770156 master-0 kubenswrapper[31420]: I0220 12:06:36.770002 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f4156c45-4529-4c5a-b691-33bc78ea64ae-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd"
Feb 20 12:06:36.770156 master-0 kubenswrapper[31420]: I0220 12:06:36.770028 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81492110-a8ec-4393-9df3-5fcb8c6c092e-serving-cert\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl"
Feb 20 12:06:36.770156 master-0 kubenswrapper[31420]: I0220 12:06:36.770078 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/034ed75f-05ba-4a92-8fba-40b9ee2155bf-serviceca\") pod \"node-ca-8p77l\" (UID: \"034ed75f-05ba-4a92-8fba-40b9ee2155bf\") " pod="openshift-image-registry/node-ca-8p77l"
Feb 20 12:06:36.871221 master-0 kubenswrapper[31420]: I0220 12:06:36.871168 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-policies\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.872376 master-0 kubenswrapper[31420]: I0220 12:06:36.872330 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-client-ca\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588"
Feb 20 12:06:36.872439 master-0 kubenswrapper[31420]: I0220 12:06:36.872392 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/034ed75f-05ba-4a92-8fba-40b9ee2155bf-host\") pod \"node-ca-8p77l\" (UID: \"034ed75f-05ba-4a92-8fba-40b9ee2155bf\") " pod="openshift-image-registry/node-ca-8p77l"
Feb 20 12:06:36.872475 master-0 kubenswrapper[31420]: I0220 12:06:36.872443 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-dir\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:06:36.872475 master-0 kubenswrapper[31420]: I0220 12:06:36.872466 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srv5n\" (UniqueName: \"kubernetes.io/projected/034ed75f-05ba-4a92-8fba-40b9ee2155bf-kube-api-access-srv5n\") pod \"node-ca-8p77l\" (UID: \"034ed75f-05ba-4a92-8fba-40b9ee2155bf\") " pod="openshift-image-registry/node-ca-8p77l"
Feb 20 12:06:36.872559 master-0 kubenswrapper[31420]: I0220 
12:06:36.872239 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-policies\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.872712 master-0 kubenswrapper[31420]: I0220 12:06:36.872634 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/034ed75f-05ba-4a92-8fba-40b9ee2155bf-host\") pod \"node-ca-8p77l\" (UID: \"034ed75f-05ba-4a92-8fba-40b9ee2155bf\") " pod="openshift-image-registry/node-ca-8p77l" Feb 20 12:06:36.872815 master-0 kubenswrapper[31420]: I0220 12:06:36.872764 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-router-certs\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.872929 master-0 kubenswrapper[31420]: I0220 12:06:36.872889 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-error\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.872987 master-0 kubenswrapper[31420]: I0220 12:06:36.872957 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6e272535-93c8-4259-8775-f61f62b07be7-monitoring-plugin-cert\") pod \"monitoring-plugin-6cf879bbbd-4fqq9\" (UID: 
\"6e272535-93c8-4259-8775-f61f62b07be7\") " pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9" Feb 20 12:06:36.873051 master-0 kubenswrapper[31420]: I0220 12:06:36.873018 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81492110-a8ec-4393-9df3-5fcb8c6c092e-config\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:36.873147 master-0 kubenswrapper[31420]: I0220 12:06:36.873116 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn4p8\" (UniqueName: \"kubernetes.io/projected/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-kube-api-access-hn4p8\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873179 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-session\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873230 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873280 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f4156c45-4529-4c5a-b691-33bc78ea64ae-ready\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873329 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-675vc\" (UniqueName: \"kubernetes.io/projected/cec8d018-dd5b-4607-b3d7-1c824aa9a193-kube-api-access-675vc\") pod \"collect-profiles-29526480-9s2sk\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873386 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873443 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-login\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873517 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873610 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81492110-a8ec-4393-9df3-5fcb8c6c092e-client-ca\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873678 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-serving-cert\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873748 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec8d018-dd5b-4607-b3d7-1c824aa9a193-secret-volume\") pod \"collect-profiles-29526480-9s2sk\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873862 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pg2j\" (UniqueName: \"kubernetes.io/projected/81492110-a8ec-4393-9df3-5fcb8c6c092e-kube-api-access-8pg2j\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873904 
31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec8d018-dd5b-4607-b3d7-1c824aa9a193-config-volume\") pod \"collect-profiles-29526480-9s2sk\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.873950 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874004 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874054 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-proxy-ca-bundles\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874100 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4156c45-4529-4c5a-b691-33bc78ea64ae-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874135 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl4dd\" (UniqueName: \"kubernetes.io/projected/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-kube-api-access-gl4dd\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874181 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-service-ca\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874217 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f4156c45-4529-4c5a-b691-33bc78ea64ae-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874256 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81492110-a8ec-4393-9df3-5fcb8c6c092e-serving-cert\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 
12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874323 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/034ed75f-05ba-4a92-8fba-40b9ee2155bf-serviceca\") pod \"node-ca-8p77l\" (UID: \"034ed75f-05ba-4a92-8fba-40b9ee2155bf\") " pod="openshift-image-registry/node-ca-8p77l" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874372 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-config\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874434 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vn6r\" (UniqueName: \"kubernetes.io/projected/f4156c45-4529-4c5a-b691-33bc78ea64ae-kube-api-access-6vn6r\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:36.874462 master-0 kubenswrapper[31420]: I0220 12:06:36.874449 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.874543 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81492110-a8ec-4393-9df3-5fcb8c6c092e-config\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: 
\"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.874940 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-client-ca\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.873959 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f4156c45-4529-4c5a-b691-33bc78ea64ae-ready\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.875725 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-dir\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.876362 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-error\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.877088 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/cec8d018-dd5b-4607-b3d7-1c824aa9a193-config-volume\") pod \"collect-profiles-29526480-9s2sk\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.877550 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/034ed75f-05ba-4a92-8fba-40b9ee2155bf-serviceca\") pod \"node-ca-8p77l\" (UID: \"034ed75f-05ba-4a92-8fba-40b9ee2155bf\") " pod="openshift-image-registry/node-ca-8p77l" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.877604 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f4156c45-4529-4c5a-b691-33bc78ea64ae-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.878354 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f4156c45-4529-4c5a-b691-33bc78ea64ae-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.878514 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-service-ca\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.878699 31420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-config\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.878697 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81492110-a8ec-4393-9df3-5fcb8c6c092e-client-ca\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.879627 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.880135 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-proxy-ca-bundles\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.880645 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-login\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") 
" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.881360 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-router-certs\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.882054 master-0 kubenswrapper[31420]: I0220 12:06:36.881384 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81492110-a8ec-4393-9df3-5fcb8c6c092e-serving-cert\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:36.883121 master-0 kubenswrapper[31420]: I0220 12:06:36.882337 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.883378 master-0 kubenswrapper[31420]: I0220 12:06:36.883309 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6e272535-93c8-4259-8775-f61f62b07be7-monitoring-plugin-cert\") pod \"monitoring-plugin-6cf879bbbd-4fqq9\" (UID: \"6e272535-93c8-4259-8775-f61f62b07be7\") " pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9" Feb 20 12:06:36.884176 master-0 kubenswrapper[31420]: I0220 12:06:36.884134 31420 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-session\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.885186 master-0 kubenswrapper[31420]: I0220 12:06:36.885146 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec8d018-dd5b-4607-b3d7-1c824aa9a193-secret-volume\") pod \"collect-profiles-29526480-9s2sk\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" Feb 20 12:06:36.885515 master-0 kubenswrapper[31420]: I0220 12:06:36.885479 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.886700 master-0 kubenswrapper[31420]: I0220 12:06:36.886640 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-serving-cert\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:36.887481 master-0 kubenswrapper[31420]: I0220 12:06:36.887434 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " 
pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.888222 master-0 kubenswrapper[31420]: I0220 12:06:36.888180 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn4p8\" (UniqueName: \"kubernetes.io/projected/8dda773b-6ab7-436a-8fec-e849e2eaf0d4-kube-api-access-hn4p8\") pod \"controller-manager-6cdbd68d56-hw588\" (UID: \"8dda773b-6ab7-436a-8fec-e849e2eaf0d4\") " pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:36.890662 master-0 kubenswrapper[31420]: I0220 12:06:36.890630 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srv5n\" (UniqueName: \"kubernetes.io/projected/034ed75f-05ba-4a92-8fba-40b9ee2155bf-kube-api-access-srv5n\") pod \"node-ca-8p77l\" (UID: \"034ed75f-05ba-4a92-8fba-40b9ee2155bf\") " pod="openshift-image-registry/node-ca-8p77l" Feb 20 12:06:36.898791 master-0 kubenswrapper[31420]: I0220 12:06:36.898723 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-675vc\" (UniqueName: \"kubernetes.io/projected/cec8d018-dd5b-4607-b3d7-1c824aa9a193-kube-api-access-675vc\") pod \"collect-profiles-29526480-9s2sk\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" Feb 20 12:06:36.900600 master-0 kubenswrapper[31420]: I0220 12:06:36.900568 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vn6r\" (UniqueName: \"kubernetes.io/projected/f4156c45-4529-4c5a-b691-33bc78ea64ae-kube-api-access-6vn6r\") pod \"cni-sysctl-allowlist-ds-hpqwd\" (UID: \"f4156c45-4529-4c5a-b691-33bc78ea64ae\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:36.900600 master-0 kubenswrapper[31420]: I0220 12:06:36.900570 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl4dd\" (UniqueName: 
\"kubernetes.io/projected/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-kube-api-access-gl4dd\") pod \"oauth-openshift-64ddc49fd6-t9s4l\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") " pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:36.904232 master-0 kubenswrapper[31420]: I0220 12:06:36.904189 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pg2j\" (UniqueName: \"kubernetes.io/projected/81492110-a8ec-4393-9df3-5fcb8c6c092e-kube-api-access-8pg2j\") pod \"route-controller-manager-5848d6679c-7phcl\" (UID: \"81492110-a8ec-4393-9df3-5fcb8c6c092e\") " pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:36.999752 master-0 kubenswrapper[31420]: I0220 12:06:36.998988 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:37.011152 master-0 kubenswrapper[31420]: I0220 12:06:37.011106 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:37.034082 master-0 kubenswrapper[31420]: I0220 12:06:37.033801 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8p77l" Feb 20 12:06:37.057816 master-0 kubenswrapper[31420]: I0220 12:06:37.057096 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:37.076019 master-0 kubenswrapper[31420]: I0220 12:06:37.075972 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" Feb 20 12:06:37.085740 master-0 kubenswrapper[31420]: I0220 12:06:37.085677 31420 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 12:06:37.090396 master-0 kubenswrapper[31420]: I0220 12:06:37.090016 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9" Feb 20 12:06:37.106586 master-0 kubenswrapper[31420]: I0220 12:06:37.105960 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:37.486062 master-0 kubenswrapper[31420]: I0220 12:06:37.486000 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cdbd68d56-hw588"] Feb 20 12:06:37.503441 master-0 kubenswrapper[31420]: W0220 12:06:37.503387 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dda773b_6ab7_436a_8fec_e849e2eaf0d4.slice/crio-60f85cbbdbb57cbda4a63ed2515a0d0b269ced412e581b3462599c699ddd19dd WatchSource:0}: Error finding container 60f85cbbdbb57cbda4a63ed2515a0d0b269ced412e581b3462599c699ddd19dd: Status 404 returned error can't find the container with id 60f85cbbdbb57cbda4a63ed2515a0d0b269ced412e581b3462599c699ddd19dd Feb 20 12:06:37.594729 master-0 kubenswrapper[31420]: I0220 12:06:37.594656 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl"] Feb 20 12:06:37.603196 master-0 kubenswrapper[31420]: W0220 12:06:37.603129 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81492110_a8ec_4393_9df3_5fcb8c6c092e.slice/crio-868cb831230d5fdc3dbade44c1959dc833ce4bf74ce89cb0905b2fc06b1b6743 
WatchSource:0}: Error finding container 868cb831230d5fdc3dbade44c1959dc833ce4bf74ce89cb0905b2fc06b1b6743: Status 404 returned error can't find the container with id 868cb831230d5fdc3dbade44c1959dc833ce4bf74ce89cb0905b2fc06b1b6743 Feb 20 12:06:37.614581 master-0 kubenswrapper[31420]: I0220 12:06:37.614508 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"] Feb 20 12:06:37.619057 master-0 kubenswrapper[31420]: W0220 12:06:37.619005 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf47a071f_b3f1_4b00_9e67_d39b3e89eaac.slice/crio-0d48f808852dd525b6997a8f44f34e27f45897e6032be5ebde82135fe3e20334 WatchSource:0}: Error finding container 0d48f808852dd525b6997a8f44f34e27f45897e6032be5ebde82135fe3e20334: Status 404 returned error can't find the container with id 0d48f808852dd525b6997a8f44f34e27f45897e6032be5ebde82135fe3e20334 Feb 20 12:06:37.710576 master-0 kubenswrapper[31420]: I0220 12:06:37.710502 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk"] Feb 20 12:06:37.712171 master-0 kubenswrapper[31420]: I0220 12:06:37.712130 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9"] Feb 20 12:06:37.827549 master-0 kubenswrapper[31420]: I0220 12:06:37.826889 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" event={"ID":"81492110-a8ec-4393-9df3-5fcb8c6c092e","Type":"ContainerStarted","Data":"a9296634b4e2d7847ddd8dbbff255742ee3fedddd4fe191b77b17be6b3c0b1cc"} Feb 20 12:06:37.827549 master-0 kubenswrapper[31420]: I0220 12:06:37.826953 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" 
event={"ID":"81492110-a8ec-4393-9df3-5fcb8c6c092e","Type":"ContainerStarted","Data":"868cb831230d5fdc3dbade44c1959dc833ce4bf74ce89cb0905b2fc06b1b6743"} Feb 20 12:06:37.827549 master-0 kubenswrapper[31420]: I0220 12:06:37.827123 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:37.829166 master-0 kubenswrapper[31420]: I0220 12:06:37.828306 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" event={"ID":"cec8d018-dd5b-4607-b3d7-1c824aa9a193","Type":"ContainerStarted","Data":"216a5c99cdb73a3458ea653372da06dcbfe1e28103f5130721c17fde33e61926"} Feb 20 12:06:37.830630 master-0 kubenswrapper[31420]: I0220 12:06:37.830203 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" event={"ID":"f4156c45-4529-4c5a-b691-33bc78ea64ae","Type":"ContainerStarted","Data":"34f81334064b69cbec1e471ee990053711ca80d7079c1fa6e1d33adc809187e8"} Feb 20 12:06:37.830630 master-0 kubenswrapper[31420]: I0220 12:06:37.830245 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" event={"ID":"f4156c45-4529-4c5a-b691-33bc78ea64ae","Type":"ContainerStarted","Data":"d525d8256c9f413be34bb397253a55968ece90fa44dd7b2d784e0b86f2613488"} Feb 20 12:06:37.830975 master-0 kubenswrapper[31420]: I0220 12:06:37.830862 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:37.832315 master-0 kubenswrapper[31420]: I0220 12:06:37.832270 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" event={"ID":"8dda773b-6ab7-436a-8fec-e849e2eaf0d4","Type":"ContainerStarted","Data":"05741a391d33ab7f22118a7367316dcbb6723c5386195adc01dcee47d373d103"} Feb 20 12:06:37.832315 master-0 
kubenswrapper[31420]: I0220 12:06:37.832309 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" event={"ID":"8dda773b-6ab7-436a-8fec-e849e2eaf0d4","Type":"ContainerStarted","Data":"60f85cbbdbb57cbda4a63ed2515a0d0b269ced412e581b3462599c699ddd19dd"} Feb 20 12:06:37.832487 master-0 kubenswrapper[31420]: I0220 12:06:37.832449 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:37.833896 master-0 kubenswrapper[31420]: I0220 12:06:37.833617 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" event={"ID":"f47a071f-b3f1-4b00-9e67-d39b3e89eaac","Type":"ContainerStarted","Data":"0d48f808852dd525b6997a8f44f34e27f45897e6032be5ebde82135fe3e20334"} Feb 20 12:06:37.835275 master-0 kubenswrapper[31420]: I0220 12:06:37.835240 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8p77l" event={"ID":"034ed75f-05ba-4a92-8fba-40b9ee2155bf","Type":"ContainerStarted","Data":"db324fe0b743c96f4461c25a688ebd68d03a3608d33269dca9bf4a73aba27554"} Feb 20 12:06:37.838832 master-0 kubenswrapper[31420]: I0220 12:06:37.838783 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9" event={"ID":"6e272535-93c8-4259-8775-f61f62b07be7","Type":"ContainerStarted","Data":"ab29dc0cd31522a932556218947998a4a859a782bb18a4d1619c08172c5af497"} Feb 20 12:06:37.839447 master-0 kubenswrapper[31420]: I0220 12:06:37.839404 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" Feb 20 12:06:37.882859 master-0 kubenswrapper[31420]: I0220 12:06:37.882616 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" podStartSLOduration=53.882589727 podStartE2EDuration="53.882589727s" podCreationTimestamp="2026-02-20 12:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:06:37.880031415 +0000 UTC m=+102.599269676" watchObservedRunningTime="2026-02-20 12:06:37.882589727 +0000 UTC m=+102.601827978" Feb 20 12:06:38.095023 master-0 kubenswrapper[31420]: I0220 12:06:38.094157 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cdbd68d56-hw588" podStartSLOduration=54.094138194 podStartE2EDuration="54.094138194s" podCreationTimestamp="2026-02-20 12:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:06:38.093928178 +0000 UTC m=+102.813166419" watchObservedRunningTime="2026-02-20 12:06:38.094138194 +0000 UTC m=+102.813376435" Feb 20 12:06:38.095498 master-0 kubenswrapper[31420]: I0220 12:06:38.095469 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" podStartSLOduration=159.095462872 podStartE2EDuration="2m39.095462872s" podCreationTimestamp="2026-02-20 12:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:06:37.935734479 +0000 UTC m=+102.654972730" watchObservedRunningTime="2026-02-20 12:06:38.095462872 +0000 UTC m=+102.814701113" Feb 20 12:06:38.374773 master-0 kubenswrapper[31420]: I0220 12:06:38.374658 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5848d6679c-7phcl" Feb 20 12:06:38.856601 master-0 kubenswrapper[31420]: I0220 12:06:38.856489 31420 
generic.go:334] "Generic (PLEG): container finished" podID="cec8d018-dd5b-4607-b3d7-1c824aa9a193" containerID="9c597a65a05cee62b3e0960e640acde8f3c03a2720e3886a29813cc02d33c3b4" exitCode=0 Feb 20 12:06:38.857881 master-0 kubenswrapper[31420]: I0220 12:06:38.856735 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" event={"ID":"cec8d018-dd5b-4607-b3d7-1c824aa9a193","Type":"ContainerDied","Data":"9c597a65a05cee62b3e0960e640acde8f3c03a2720e3886a29813cc02d33c3b4"} Feb 20 12:06:38.877880 master-0 kubenswrapper[31420]: I0220 12:06:38.876618 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 20 12:06:38.878465 master-0 kubenswrapper[31420]: I0220 12:06:38.878330 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:38.898925 master-0 kubenswrapper[31420]: I0220 12:06:38.898869 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 20 12:06:38.916182 master-0 kubenswrapper[31420]: I0220 12:06:38.915756 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hpqwd" Feb 20 12:06:39.044656 master-0 kubenswrapper[31420]: I0220 12:06:39.044582 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kube-api-access\") pod \"installer-6-master-0\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:39.044656 master-0 kubenswrapper[31420]: I0220 12:06:39.044643 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:39.044934 master-0 kubenswrapper[31420]: I0220 12:06:39.044771 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-var-lock\") pod \"installer-6-master-0\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:39.146053 master-0 kubenswrapper[31420]: I0220 12:06:39.145946 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-var-lock\") pod \"installer-6-master-0\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:39.146053 master-0 kubenswrapper[31420]: I0220 12:06:39.146019 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kube-api-access\") pod \"installer-6-master-0\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:39.146267 master-0 kubenswrapper[31420]: I0220 12:06:39.146120 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-var-lock\") pod \"installer-6-master-0\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:39.146267 master-0 kubenswrapper[31420]: I0220 12:06:39.146219 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:39.146344 master-0 kubenswrapper[31420]: I0220 12:06:39.146324 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:39.161169 master-0 kubenswrapper[31420]: I0220 12:06:39.161116 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kube-api-access\") pod \"installer-6-master-0\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:39.217599 master-0 kubenswrapper[31420]: I0220 12:06:39.217544 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 20 12:06:40.427997 master-0 kubenswrapper[31420]: I0220 12:06:40.427950 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" Feb 20 12:06:40.570306 master-0 kubenswrapper[31420]: I0220 12:06:40.570240 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec8d018-dd5b-4607-b3d7-1c824aa9a193-secret-volume\") pod \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " Feb 20 12:06:40.570515 master-0 kubenswrapper[31420]: I0220 12:06:40.570319 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec8d018-dd5b-4607-b3d7-1c824aa9a193-config-volume\") pod \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " Feb 20 12:06:40.570515 master-0 kubenswrapper[31420]: I0220 12:06:40.570441 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-675vc\" (UniqueName: \"kubernetes.io/projected/cec8d018-dd5b-4607-b3d7-1c824aa9a193-kube-api-access-675vc\") pod \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\" (UID: \"cec8d018-dd5b-4607-b3d7-1c824aa9a193\") " Feb 20 12:06:40.571827 master-0 kubenswrapper[31420]: I0220 12:06:40.571786 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cec8d018-dd5b-4607-b3d7-1c824aa9a193-config-volume" (OuterVolumeSpecName: "config-volume") pod "cec8d018-dd5b-4607-b3d7-1c824aa9a193" (UID: "cec8d018-dd5b-4607-b3d7-1c824aa9a193"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:06:40.575964 master-0 kubenswrapper[31420]: I0220 12:06:40.575904 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec8d018-dd5b-4607-b3d7-1c824aa9a193-kube-api-access-675vc" (OuterVolumeSpecName: "kube-api-access-675vc") pod "cec8d018-dd5b-4607-b3d7-1c824aa9a193" (UID: "cec8d018-dd5b-4607-b3d7-1c824aa9a193"). InnerVolumeSpecName "kube-api-access-675vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:06:40.576971 master-0 kubenswrapper[31420]: I0220 12:06:40.576927 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec8d018-dd5b-4607-b3d7-1c824aa9a193-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cec8d018-dd5b-4607-b3d7-1c824aa9a193" (UID: "cec8d018-dd5b-4607-b3d7-1c824aa9a193"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:06:40.672697 master-0 kubenswrapper[31420]: I0220 12:06:40.672309 31420 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec8d018-dd5b-4607-b3d7-1c824aa9a193-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:40.672697 master-0 kubenswrapper[31420]: I0220 12:06:40.672358 31420 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec8d018-dd5b-4607-b3d7-1c824aa9a193-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:40.672697 master-0 kubenswrapper[31420]: I0220 12:06:40.672372 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-675vc\" (UniqueName: \"kubernetes.io/projected/cec8d018-dd5b-4607-b3d7-1c824aa9a193-kube-api-access-675vc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:40.832752 master-0 kubenswrapper[31420]: I0220 12:06:40.832696 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 20 12:06:40.841364 master-0 kubenswrapper[31420]: W0220 12:06:40.841309 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbf6108c5_19ba_4f99_9f75_6e02fa5876f2.slice/crio-9caae01ec23401038dfdcd6e44fd8a4a86387fffcd6ee59fd76b10a1cd2b21c5 WatchSource:0}: Error finding container 9caae01ec23401038dfdcd6e44fd8a4a86387fffcd6ee59fd76b10a1cd2b21c5: Status 404 returned error can't find the container with id 9caae01ec23401038dfdcd6e44fd8a4a86387fffcd6ee59fd76b10a1cd2b21c5 Feb 20 12:06:40.871938 master-0 kubenswrapper[31420]: I0220 12:06:40.871892 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8p77l" event={"ID":"034ed75f-05ba-4a92-8fba-40b9ee2155bf","Type":"ContainerStarted","Data":"23bb41ed6ce4477eb1d52a8f9c8a7bf1e6c51385576f706f3caefe7d443b0095"} Feb 20 12:06:40.874232 master-0 kubenswrapper[31420]: I0220 12:06:40.874205 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9" event={"ID":"6e272535-93c8-4259-8775-f61f62b07be7","Type":"ContainerStarted","Data":"5205c53679c19ed035532f972a4129a1e3eb0b3a847ff01d5fe737eb1730a3ff"} Feb 20 12:06:40.874884 master-0 kubenswrapper[31420]: I0220 12:06:40.874841 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9" Feb 20 12:06:40.881756 master-0 kubenswrapper[31420]: I0220 12:06:40.881666 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" event={"ID":"cec8d018-dd5b-4607-b3d7-1c824aa9a193","Type":"ContainerDied","Data":"216a5c99cdb73a3458ea653372da06dcbfe1e28103f5130721c17fde33e61926"} Feb 20 12:06:40.881756 master-0 kubenswrapper[31420]: I0220 12:06:40.881728 31420 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="216a5c99cdb73a3458ea653372da06dcbfe1e28103f5130721c17fde33e61926" Feb 20 12:06:40.883624 master-0 kubenswrapper[31420]: I0220 12:06:40.883587 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk" Feb 20 12:06:40.888560 master-0 kubenswrapper[31420]: I0220 12:06:40.886970 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" event={"ID":"f47a071f-b3f1-4b00-9e67-d39b3e89eaac","Type":"ContainerStarted","Data":"09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687"} Feb 20 12:06:40.888560 master-0 kubenswrapper[31420]: I0220 12:06:40.887045 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9" Feb 20 12:06:40.888560 master-0 kubenswrapper[31420]: I0220 12:06:40.887241 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:40.890606 master-0 kubenswrapper[31420]: I0220 12:06:40.889808 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"bf6108c5-19ba-4f99-9f75-6e02fa5876f2","Type":"ContainerStarted","Data":"9caae01ec23401038dfdcd6e44fd8a4a86387fffcd6ee59fd76b10a1cd2b21c5"} Feb 20 12:06:40.899944 master-0 kubenswrapper[31420]: I0220 12:06:40.899797 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8p77l" podStartSLOduration=10.563525142 podStartE2EDuration="13.899717072s" podCreationTimestamp="2026-02-20 12:06:27 +0000 UTC" firstStartedPulling="2026-02-20 12:06:37.08556171 +0000 UTC m=+101.804799961" lastFinishedPulling="2026-02-20 12:06:40.42175364 +0000 UTC m=+105.140991891" observedRunningTime="2026-02-20 12:06:40.895124032 +0000 UTC m=+105.614362273" 
watchObservedRunningTime="2026-02-20 12:06:40.899717072 +0000 UTC m=+105.618955393" Feb 20 12:06:40.974866 master-0 kubenswrapper[31420]: I0220 12:06:40.974761 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" podStartSLOduration=39.153027348 podStartE2EDuration="41.974739676s" podCreationTimestamp="2026-02-20 12:05:59 +0000 UTC" firstStartedPulling="2026-02-20 12:06:37.629341815 +0000 UTC m=+102.348580096" lastFinishedPulling="2026-02-20 12:06:40.451054163 +0000 UTC m=+105.170292424" observedRunningTime="2026-02-20 12:06:40.945017501 +0000 UTC m=+105.664255762" watchObservedRunningTime="2026-02-20 12:06:40.974739676 +0000 UTC m=+105.693977917" Feb 20 12:06:40.975244 master-0 kubenswrapper[31420]: I0220 12:06:40.974906 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6cf879bbbd-4fqq9" podStartSLOduration=69.274080301 podStartE2EDuration="1m11.974902101s" podCreationTimestamp="2026-02-20 12:05:29 +0000 UTC" firstStartedPulling="2026-02-20 12:06:37.725720776 +0000 UTC m=+102.444959037" lastFinishedPulling="2026-02-20 12:06:40.426542586 +0000 UTC m=+105.145780837" observedRunningTime="2026-02-20 12:06:40.974731196 +0000 UTC m=+105.693969457" watchObservedRunningTime="2026-02-20 12:06:40.974902101 +0000 UTC m=+105.694140342" Feb 20 12:06:41.147969 master-0 kubenswrapper[31420]: I0220 12:06:41.147903 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:06:41.901276 master-0 kubenswrapper[31420]: I0220 12:06:41.901181 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"bf6108c5-19ba-4f99-9f75-6e02fa5876f2","Type":"ContainerStarted","Data":"5f7740e2b085fb2bcbfef55b938e905a363001bf504c40676876a5b22364533c"} Feb 20 12:06:41.925920 master-0 
kubenswrapper[31420]: I0220 12:06:41.925823 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=3.9257976340000003 podStartE2EDuration="3.925797634s" podCreationTimestamp="2026-02-20 12:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:06:41.924472257 +0000 UTC m=+106.643710528" watchObservedRunningTime="2026-02-20 12:06:41.925797634 +0000 UTC m=+106.645035885" Feb 20 12:06:50.772580 master-0 kubenswrapper[31420]: I0220 12:06:50.772468 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_e9cd2982-6b46-4ced-9e1d-78b60cbd6391/installer/0.log" Feb 20 12:06:50.773344 master-0 kubenswrapper[31420]: I0220 12:06:50.772647 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 12:06:50.863452 master-0 kubenswrapper[31420]: I0220 12:06:50.862969 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kubelet-dir\") pod \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " Feb 20 12:06:50.863452 master-0 kubenswrapper[31420]: I0220 12:06:50.863101 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e9cd2982-6b46-4ced-9e1d-78b60cbd6391" (UID: "e9cd2982-6b46-4ced-9e1d-78b60cbd6391"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:06:50.863452 master-0 kubenswrapper[31420]: I0220 12:06:50.863125 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-var-lock\") pod \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " Feb 20 12:06:50.863452 master-0 kubenswrapper[31420]: I0220 12:06:50.863214 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-var-lock" (OuterVolumeSpecName: "var-lock") pod "e9cd2982-6b46-4ced-9e1d-78b60cbd6391" (UID: "e9cd2982-6b46-4ced-9e1d-78b60cbd6391"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:06:50.863452 master-0 kubenswrapper[31420]: I0220 12:06:50.863276 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kube-api-access\") pod \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\" (UID: \"e9cd2982-6b46-4ced-9e1d-78b60cbd6391\") " Feb 20 12:06:50.864504 master-0 kubenswrapper[31420]: I0220 12:06:50.864451 31420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:50.864504 master-0 kubenswrapper[31420]: I0220 12:06:50.864489 31420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:50.867267 master-0 kubenswrapper[31420]: I0220 12:06:50.867210 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e9cd2982-6b46-4ced-9e1d-78b60cbd6391" (UID: "e9cd2982-6b46-4ced-9e1d-78b60cbd6391"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:06:50.965638 master-0 kubenswrapper[31420]: I0220 12:06:50.965510 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9cd2982-6b46-4ced-9e1d-78b60cbd6391-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 12:06:51.002510 master-0 kubenswrapper[31420]: I0220 12:06:51.002174 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_e9cd2982-6b46-4ced-9e1d-78b60cbd6391/installer/0.log" Feb 20 12:06:51.002510 master-0 kubenswrapper[31420]: I0220 12:06:51.002260 31420 generic.go:334] "Generic (PLEG): container finished" podID="e9cd2982-6b46-4ced-9e1d-78b60cbd6391" containerID="43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45" exitCode=1 Feb 20 12:06:51.002510 master-0 kubenswrapper[31420]: I0220 12:06:51.002302 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"e9cd2982-6b46-4ced-9e1d-78b60cbd6391","Type":"ContainerDied","Data":"43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45"} Feb 20 12:06:51.002510 master-0 kubenswrapper[31420]: I0220 12:06:51.002340 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"e9cd2982-6b46-4ced-9e1d-78b60cbd6391","Type":"ContainerDied","Data":"f8a215a4ba0b20f3e959b0b731e5793137ea4c5abc28f3f055038954db2265e4"} Feb 20 12:06:51.002510 master-0 kubenswrapper[31420]: I0220 12:06:51.002370 31420 scope.go:117] "RemoveContainer" containerID="43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45" Feb 20 12:06:51.002510 master-0 
kubenswrapper[31420]: I0220 12:06:51.002434 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 12:06:51.031147 master-0 kubenswrapper[31420]: I0220 12:06:51.031038 31420 scope.go:117] "RemoveContainer" containerID="43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45"
Feb 20 12:06:51.031613 master-0 kubenswrapper[31420]: E0220 12:06:51.031523 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45\": container with ID starting with 43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45 not found: ID does not exist" containerID="43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45"
Feb 20 12:06:51.031613 master-0 kubenswrapper[31420]: I0220 12:06:51.031570 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45"} err="failed to get container status \"43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45\": rpc error: code = NotFound desc = could not find container \"43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45\": container with ID starting with 43b7427e5d8486dc18092763d632c45c4dafd6fa320475f95ea95bb6b7625d45 not found: ID does not exist"
Feb 20 12:06:51.060800 master-0 kubenswrapper[31420]: I0220 12:06:51.060720 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 20 12:06:51.065613 master-0 kubenswrapper[31420]: I0220 12:06:51.065485 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 20 12:06:51.511503 master-0 kubenswrapper[31420]: I0220 12:06:51.511410 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9cd2982-6b46-4ced-9e1d-78b60cbd6391" path="/var/lib/kubelet/pods/e9cd2982-6b46-4ced-9e1d-78b60cbd6391/volumes"
Feb 20 12:07:29.226055 master-0 kubenswrapper[31420]: I0220 12:07:29.225919 31420 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.226662 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" containerID="cri-o://98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d" gracePeriod=15
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.226732 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989" gracePeriod=15
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.226887 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412" gracePeriod=15
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.226882 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb" gracePeriod=15
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.226897 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c" gracePeriod=15
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.229401 31420 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: E0220 12:07:29.229994 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9cd2982-6b46-4ced-9e1d-78b60cbd6391" containerName="installer"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230028 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9cd2982-6b46-4ced-9e1d-78b60cbd6391" containerName="installer"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: E0220 12:07:29.230060 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230076 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: E0220 12:07:29.230111 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cec8d018-dd5b-4607-b3d7-1c824aa9a193" containerName="collect-profiles"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230128 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec8d018-dd5b-4607-b3d7-1c824aa9a193" containerName="collect-profiles"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: E0220 12:07:29.230160 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230178 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: E0220 12:07:29.230220 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230236 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: E0220 12:07:29.230255 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230274 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: E0220 12:07:29.230295 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230311 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: E0220 12:07:29.230340 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230356 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: E0220 12:07:29.230379 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230395 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230694 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230747 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230799 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9cd2982-6b46-4ced-9e1d-78b60cbd6391" containerName="installer"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230833 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230866 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230930 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver"
Feb 20 12:07:29.231719 master-0 kubenswrapper[31420]: I0220 12:07:29.230958 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec8d018-dd5b-4607-b3d7-1c824aa9a193" containerName="collect-profiles"
Feb 20 12:07:29.237168 master-0 kubenswrapper[31420]: I0220 12:07:29.231918 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints"
Feb 20 12:07:29.237168 master-0 kubenswrapper[31420]: I0220 12:07:29.237107 31420 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 20 12:07:29.239047 master-0 kubenswrapper[31420]: I0220 12:07:29.238956 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.246864 master-0 kubenswrapper[31420]: I0220 12:07:29.246748 31420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="eb342c942d3d92fd08ed7cf68fafb94c" podUID="2202ebf88dd4d5cadde1ad8cb2bbaddc"
Feb 20 12:07:29.289183 master-0 kubenswrapper[31420]: I0220 12:07:29.286734 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.289183 master-0 kubenswrapper[31420]: I0220 12:07:29.286933 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.289183 master-0 kubenswrapper[31420]: I0220 12:07:29.286987 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2202ebf88dd4d5cadde1ad8cb2bbaddc-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"2202ebf88dd4d5cadde1ad8cb2bbaddc\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:29.289183 master-0 kubenswrapper[31420]: I0220 12:07:29.287025 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2202ebf88dd4d5cadde1ad8cb2bbaddc-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"2202ebf88dd4d5cadde1ad8cb2bbaddc\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:29.289183 master-0 kubenswrapper[31420]: I0220 12:07:29.287171 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.289183 master-0 kubenswrapper[31420]: I0220 12:07:29.287234 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.289183 master-0 kubenswrapper[31420]: I0220 12:07:29.287279 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.289183 master-0 kubenswrapper[31420]: I0220 12:07:29.287358 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2202ebf88dd4d5cadde1ad8cb2bbaddc-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"2202ebf88dd4d5cadde1ad8cb2bbaddc\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:29.388284 master-0 kubenswrapper[31420]: I0220 12:07:29.388209 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.388472 master-0 kubenswrapper[31420]: I0220 12:07:29.388308 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.388472 master-0 kubenswrapper[31420]: I0220 12:07:29.388465 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2202ebf88dd4d5cadde1ad8cb2bbaddc-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"2202ebf88dd4d5cadde1ad8cb2bbaddc\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:29.388652 master-0 kubenswrapper[31420]: I0220 12:07:29.388494 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2202ebf88dd4d5cadde1ad8cb2bbaddc-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"2202ebf88dd4d5cadde1ad8cb2bbaddc\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:29.388652 master-0 kubenswrapper[31420]: I0220 12:07:29.388453 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.388652 master-0 kubenswrapper[31420]: I0220 12:07:29.388602 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2202ebf88dd4d5cadde1ad8cb2bbaddc-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"2202ebf88dd4d5cadde1ad8cb2bbaddc\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.388669 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.388639 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.388654 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2202ebf88dd4d5cadde1ad8cb2bbaddc-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"2202ebf88dd4d5cadde1ad8cb2bbaddc\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.388763 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.388881 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.388869 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.388897 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.388948 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.389031 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2202ebf88dd4d5cadde1ad8cb2bbaddc-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"2202ebf88dd4d5cadde1ad8cb2bbaddc\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:29.389141 master-0 kubenswrapper[31420]: I0220 12:07:29.389122 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2202ebf88dd4d5cadde1ad8cb2bbaddc-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"2202ebf88dd4d5cadde1ad8cb2bbaddc\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:30.347910 master-0 kubenswrapper[31420]: I0220 12:07:30.347797 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/0.log"
Feb 20 12:07:30.350468 master-0 kubenswrapper[31420]: I0220 12:07:30.350407 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log"
Feb 20 12:07:30.351820 master-0 kubenswrapper[31420]: I0220 12:07:30.351744 31420 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989" exitCode=0
Feb 20 12:07:30.351820 master-0 kubenswrapper[31420]: I0220 12:07:30.351797 31420 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412" exitCode=0
Feb 20 12:07:30.351820 master-0 kubenswrapper[31420]: I0220 12:07:30.351814 31420 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c" exitCode=0
Feb 20 12:07:30.351820 master-0 kubenswrapper[31420]: I0220 12:07:30.351830 31420 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb" exitCode=2
Feb 20 12:07:30.352154 master-0 kubenswrapper[31420]: I0220 12:07:30.351886 31420 scope.go:117] "RemoveContainer" containerID="ac763378dacfc4363dcfb084085dbc52f6dc5edd975cf1b421f17f519d7cca40"
Feb 20 12:07:31.365117 master-0 kubenswrapper[31420]: I0220 12:07:31.364934 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log"
Feb 20 12:07:34.398019 master-0 kubenswrapper[31420]: I0220 12:07:34.397909 31420 generic.go:334] "Generic (PLEG): container finished" podID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" containerID="5f7740e2b085fb2bcbfef55b938e905a363001bf504c40676876a5b22364533c" exitCode=0
Feb 20 12:07:34.399798 master-0 kubenswrapper[31420]: I0220 12:07:34.397977 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"bf6108c5-19ba-4f99-9f75-6e02fa5876f2","Type":"ContainerDied","Data":"5f7740e2b085fb2bcbfef55b938e905a363001bf504c40676876a5b22364533c"}
Feb 20 12:07:35.873750 master-0 kubenswrapper[31420]: I0220 12:07:35.873677 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Feb 20 12:07:36.007109 master-0 kubenswrapper[31420]: I0220 12:07:36.007025 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-var-lock\") pod \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") "
Feb 20 12:07:36.007400 master-0 kubenswrapper[31420]: I0220 12:07:36.007122 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kubelet-dir\") pod \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") "
Feb 20 12:07:36.007400 master-0 kubenswrapper[31420]: I0220 12:07:36.007179 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf6108c5-19ba-4f99-9f75-6e02fa5876f2" (UID: "bf6108c5-19ba-4f99-9f75-6e02fa5876f2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:07:36.007400 master-0 kubenswrapper[31420]: I0220 12:07:36.007327 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kube-api-access\") pod \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\" (UID: \"bf6108c5-19ba-4f99-9f75-6e02fa5876f2\") "
Feb 20 12:07:36.007400 master-0 kubenswrapper[31420]: I0220 12:07:36.007383 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf6108c5-19ba-4f99-9f75-6e02fa5876f2" (UID: "bf6108c5-19ba-4f99-9f75-6e02fa5876f2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:07:36.007958 master-0 kubenswrapper[31420]: I0220 12:07:36.007911 31420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 12:07:36.007958 master-0 kubenswrapper[31420]: I0220 12:07:36.007947 31420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:07:36.012042 master-0 kubenswrapper[31420]: I0220 12:07:36.011986 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf6108c5-19ba-4f99-9f75-6e02fa5876f2" (UID: "bf6108c5-19ba-4f99-9f75-6e02fa5876f2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:07:36.109407 master-0 kubenswrapper[31420]: I0220 12:07:36.109229 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6108c5-19ba-4f99-9f75-6e02fa5876f2-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 12:07:36.420053 master-0 kubenswrapper[31420]: I0220 12:07:36.419833 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"bf6108c5-19ba-4f99-9f75-6e02fa5876f2","Type":"ContainerDied","Data":"9caae01ec23401038dfdcd6e44fd8a4a86387fffcd6ee59fd76b10a1cd2b21c5"}
Feb 20 12:07:36.420053 master-0 kubenswrapper[31420]: I0220 12:07:36.419921 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Feb 20 12:07:36.420371 master-0 kubenswrapper[31420]: I0220 12:07:36.419929 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9caae01ec23401038dfdcd6e44fd8a4a86387fffcd6ee59fd76b10a1cd2b21c5"
Feb 20 12:07:38.232473 master-0 kubenswrapper[31420]: I0220 12:07:38.232414 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log"
Feb 20 12:07:38.233632 master-0 kubenswrapper[31420]: I0220 12:07:38.233596 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:38.261703 master-0 kubenswrapper[31420]: I0220 12:07:38.261650 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") "
Feb 20 12:07:38.261796 master-0 kubenswrapper[31420]: I0220 12:07:38.261714 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") "
Feb 20 12:07:38.261796 master-0 kubenswrapper[31420]: I0220 12:07:38.261749 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") "
Feb 20 12:07:38.261914 master-0 kubenswrapper[31420]: I0220 12:07:38.261848 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:07:38.261960 master-0 kubenswrapper[31420]: I0220 12:07:38.261901 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:07:38.262053 master-0 kubenswrapper[31420]: I0220 12:07:38.262009 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:07:38.262474 master-0 kubenswrapper[31420]: I0220 12:07:38.262436 31420 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:07:38.262474 master-0 kubenswrapper[31420]: I0220 12:07:38.262468 31420 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:07:38.262650 master-0 kubenswrapper[31420]: I0220 12:07:38.262486 31420 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:07:38.445815 master-0 kubenswrapper[31420]: I0220 12:07:38.444747 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log"
Feb 20 12:07:38.446078 master-0 kubenswrapper[31420]: I0220 12:07:38.446028 31420 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d" exitCode=0
Feb 20 12:07:38.446195 master-0 kubenswrapper[31420]: I0220 12:07:38.446123 31420 scope.go:117] "RemoveContainer" containerID="1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989"
Feb 20 12:07:38.446302 master-0 kubenswrapper[31420]: I0220 12:07:38.446213 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:38.471562 master-0 kubenswrapper[31420]: I0220 12:07:38.471506 31420 scope.go:117] "RemoveContainer" containerID="47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412"
Feb 20 12:07:38.486746 master-0 kubenswrapper[31420]: I0220 12:07:38.486682 31420 scope.go:117] "RemoveContainer" containerID="f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c"
Feb 20 12:07:38.502235 master-0 kubenswrapper[31420]: I0220 12:07:38.502173 31420 scope.go:117] "RemoveContainer" containerID="cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb"
Feb 20 12:07:38.531893 master-0 kubenswrapper[31420]: I0220 12:07:38.531828 31420 scope.go:117] "RemoveContainer" containerID="98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d"
Feb 20 12:07:38.556239 master-0 kubenswrapper[31420]: I0220 12:07:38.556169 31420 scope.go:117] "RemoveContainer" containerID="af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0"
Feb 20 12:07:38.588547 master-0 kubenswrapper[31420]: I0220 12:07:38.588475 31420 scope.go:117] "RemoveContainer" containerID="1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989"
Feb 20 12:07:38.589068 master-0 kubenswrapper[31420]: E0220 12:07:38.589007 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989\": container with ID starting with 1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989 not found: ID does not exist" containerID="1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989"
Feb 20 12:07:38.589148 master-0 kubenswrapper[31420]: I0220 12:07:38.589064 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989"} err="failed to get container status \"1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989\": rpc error: code = NotFound desc = could not find container \"1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989\": container with ID starting with 1e3248e967546eaa2d899edca8d1f7a776ac31561d08f2c141ee5bf1df67a989 not found: ID does not exist"
Feb 20 12:07:38.589148 master-0 kubenswrapper[31420]: I0220 12:07:38.589101 31420 scope.go:117] "RemoveContainer" containerID="47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412"
Feb 20 12:07:38.589978 master-0 kubenswrapper[31420]: E0220 12:07:38.589885 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412\": container with ID starting with 47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412 not found: ID does not exist" containerID="47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412"
Feb 20 12:07:38.590222 master-0 kubenswrapper[31420]: I0220 12:07:38.589962 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412"} err="failed to get container status \"47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412\": rpc error: code = NotFound desc = could not find container \"47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412\": container with ID starting with 47dcc57de81019756b69aa0cf77c795b704b232ab0a7c095b93f80ca1a705412 not found: ID does not exist"
Feb 20 12:07:38.590222 master-0 kubenswrapper[31420]: I0220 12:07:38.590018 31420 scope.go:117] "RemoveContainer" containerID="f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c"
Feb 20 12:07:38.590545 master-0 kubenswrapper[31420]: E0220 12:07:38.590469 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c\": container with ID starting with f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c not found: ID does not exist" containerID="f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c"
Feb 20 12:07:38.590611 master-0 kubenswrapper[31420]: I0220 12:07:38.590516 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c"} err="failed to get container status \"f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c\": rpc error: code = NotFound desc = could not find container \"f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c\": container with ID starting with f59e5b8432d51685db9583bb02bd7e9ee26b994dd372cb6fcd8949b7311e8f4c not found: ID does not exist"
Feb 20 12:07:38.590611 master-0 kubenswrapper[31420]: I0220 12:07:38.590574 31420 scope.go:117] "RemoveContainer" containerID="cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb"
Feb 20 12:07:38.591062 master-0 kubenswrapper[31420]: E0220 12:07:38.590999 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb\": container with ID starting with cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb not found: ID does not exist" containerID="cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb"
Feb 20 12:07:38.591118 master-0 kubenswrapper[31420]: I0220 12:07:38.591076 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb"} err="failed to get container status \"cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb\": rpc error: code = NotFound desc = could not find container \"cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb\": container with ID starting with cc54e902a495db0a20ab369e2d2afe374c42435a5041faf1f245a36239c276fb not found: ID does not exist"
Feb 20 12:07:38.591176 master-0 kubenswrapper[31420]: I0220 12:07:38.591123 31420 scope.go:117] "RemoveContainer" containerID="98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d"
Feb 20 12:07:38.591694 master-0 kubenswrapper[31420]: E0220 12:07:38.591619 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d\": container with ID starting with 98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d not found: ID does not exist" containerID="98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d"
Feb 20 12:07:38.591791 master-0 kubenswrapper[31420]: I0220 12:07:38.591677 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d"} err="failed to get container status \"98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d\": rpc error: code = NotFound desc = could not find container \"98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d\": container with ID starting with 98cdcc382cdaf9d32531fded311eebe18429997b139f27fb5370e4a6029e108d not found: ID does not exist"
Feb 20 12:07:38.591791 master-0 kubenswrapper[31420]: I0220 12:07:38.591716 31420 scope.go:117] "RemoveContainer" containerID="af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0"
Feb 20 12:07:38.592201 master-0 kubenswrapper[31420]: E0220 12:07:38.592157 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0\": container with ID starting with 
af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0 not found: ID does not exist" containerID="af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0" Feb 20 12:07:38.592298 master-0 kubenswrapper[31420]: I0220 12:07:38.592212 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0"} err="failed to get container status \"af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0\": rpc error: code = NotFound desc = could not find container \"af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0\": container with ID starting with af8794e46bca44f5295255350b5f789a307ef0b49c6359ff00d86023682622b0 not found: ID does not exist" Feb 20 12:07:39.272418 master-0 kubenswrapper[31420]: E0220 12:07:39.272324 31420 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.273961 master-0 kubenswrapper[31420]: E0220 12:07:39.273425 31420 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.289648 master-0 kubenswrapper[31420]: E0220 12:07:39.289236 31420 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.291406 master-0 kubenswrapper[31420]: E0220 12:07:39.291337 31420 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.292100 master-0 kubenswrapper[31420]: E0220 12:07:39.292055 31420 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.292256 master-0 kubenswrapper[31420]: I0220 12:07:39.292231 31420 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 20 12:07:39.293067 master-0 kubenswrapper[31420]: E0220 12:07:39.292929 31420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 20 12:07:39.295617 master-0 kubenswrapper[31420]: E0220 12:07:39.295570 31420 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:07:39.296340 master-0 kubenswrapper[31420]: I0220 12:07:39.296313 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:07:39.328664 master-0 kubenswrapper[31420]: E0220 12:07:39.328333 31420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1895f30b8a1be62d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:10f4041a226ca54ed300f2badc93fd43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 12:07:39.327432237 +0000 UTC m=+164.046670468,LastTimestamp:2026-02-20 12:07:39.327432237 +0000 UTC m=+164.046670468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 12:07:39.410590 master-0 kubenswrapper[31420]: I0220 12:07:39.410480 31420 status_manager.go:851] "Failed to get status for pod" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.411717 master-0 kubenswrapper[31420]: I0220 12:07:39.411646 31420 status_manager.go:851] "Failed to get status for pod" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.414045 master-0 kubenswrapper[31420]: I0220 12:07:39.414006 31420 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.418578 master-0 kubenswrapper[31420]: I0220 12:07:39.418517 31420 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.419031 master-0 kubenswrapper[31420]: I0220 12:07:39.418998 31420 status_manager.go:851] "Failed to get status for pod" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:39.459322 master-0 kubenswrapper[31420]: I0220 12:07:39.459216 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"10f4041a226ca54ed300f2badc93fd43","Type":"ContainerStarted","Data":"6e601a051ec2359dba18d28cb4543af7249fa193f27b53702314383c1b662cae"} Feb 20 12:07:39.494366 master-0 kubenswrapper[31420]: E0220 12:07:39.493996 31420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 20 12:07:39.509746 master-0 kubenswrapper[31420]: I0220 12:07:39.509408 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb342c942d3d92fd08ed7cf68fafb94c" path="/var/lib/kubelet/pods/eb342c942d3d92fd08ed7cf68fafb94c/volumes" Feb 20 12:07:39.895631 master-0 kubenswrapper[31420]: E0220 12:07:39.895551 31420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 20 12:07:40.472329 master-0 kubenswrapper[31420]: I0220 12:07:40.472216 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"10f4041a226ca54ed300f2badc93fd43","Type":"ContainerStarted","Data":"2663b47d99f8b649affaa920b741ce96808073403384a9a64f7c0926a5d3fbf0"} Feb 20 12:07:40.473966 master-0 kubenswrapper[31420]: E0220 12:07:40.473869 31420 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:07:40.474107 master-0 kubenswrapper[31420]: I0220 12:07:40.473873 31420 status_manager.go:851] "Failed to get status for pod" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:40.696490 master-0 kubenswrapper[31420]: 
E0220 12:07:40.696391 31420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 20 12:07:41.041654 master-0 kubenswrapper[31420]: E0220 12:07:41.041343 31420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1895f30b8a1be62d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:10f4041a226ca54ed300f2badc93fd43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 12:07:39.327432237 +0000 UTC m=+164.046670468,LastTimestamp:2026-02-20 12:07:39.327432237 +0000 UTC m=+164.046670468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 12:07:41.484224 master-0 kubenswrapper[31420]: E0220 12:07:41.484005 31420 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:07:42.297492 master-0 kubenswrapper[31420]: E0220 12:07:42.297395 31420 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 20 12:07:42.494763 master-0 kubenswrapper[31420]: I0220 12:07:42.492572 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_65774ccd44b6b404cec890cd0cfa3872/kube-controller-manager/0.log" Feb 20 12:07:42.494763 master-0 kubenswrapper[31420]: I0220 12:07:42.492641 31420 generic.go:334] "Generic (PLEG): container finished" podID="65774ccd44b6b404cec890cd0cfa3872" containerID="fb26f752e48be63937e70537d486ea02b5e41733fdb3b27eed62024dc371a88d" exitCode=1 Feb 20 12:07:42.494763 master-0 kubenswrapper[31420]: I0220 12:07:42.492715 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerDied","Data":"fb26f752e48be63937e70537d486ea02b5e41733fdb3b27eed62024dc371a88d"} Feb 20 12:07:42.494763 master-0 kubenswrapper[31420]: I0220 12:07:42.493726 31420 scope.go:117] "RemoveContainer" containerID="fb26f752e48be63937e70537d486ea02b5e41733fdb3b27eed62024dc371a88d" Feb 20 12:07:42.494763 master-0 kubenswrapper[31420]: I0220 12:07:42.494182 31420 status_manager.go:851] "Failed to get status for pod" podUID="65774ccd44b6b404cec890cd0cfa3872" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:42.496847 master-0 kubenswrapper[31420]: I0220 12:07:42.495832 31420 status_manager.go:851] "Failed to get status for pod" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" 
pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:43.104927 master-0 kubenswrapper[31420]: I0220 12:07:43.104840 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:07:43.244867 master-0 kubenswrapper[31420]: I0220 12:07:43.244785 31420 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:07:43.496196 master-0 kubenswrapper[31420]: I0220 12:07:43.496027 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:07:43.498218 master-0 kubenswrapper[31420]: I0220 12:07:43.498128 31420 status_manager.go:851] "Failed to get status for pod" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:43.499354 master-0 kubenswrapper[31420]: I0220 12:07:43.499269 31420 status_manager.go:851] "Failed to get status for pod" podUID="65774ccd44b6b404cec890cd0cfa3872" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:43.510141 master-0 kubenswrapper[31420]: I0220 12:07:43.510077 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_65774ccd44b6b404cec890cd0cfa3872/kube-controller-manager/0.log" Feb 20 12:07:43.512020 master-0 kubenswrapper[31420]: I0220 12:07:43.511910 31420 status_manager.go:851] "Failed to get status for pod" podUID="65774ccd44b6b404cec890cd0cfa3872" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:43.513265 master-0 kubenswrapper[31420]: I0220 12:07:43.513183 31420 status_manager.go:851] "Failed to get status for pod" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:43.514761 master-0 kubenswrapper[31420]: I0220 12:07:43.514693 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"65774ccd44b6b404cec890cd0cfa3872","Type":"ContainerStarted","Data":"9018203c8a3208e7e4bcd5a26ed32c54dd1c05833036ee87e4b5bf9b3b7f996e"} Feb 20 12:07:43.519550 master-0 kubenswrapper[31420]: I0220 12:07:43.519468 31420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810" Feb 20 12:07:43.519689 master-0 kubenswrapper[31420]: I0220 12:07:43.519575 31420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810" Feb 20 12:07:43.520853 master-0 kubenswrapper[31420]: E0220 12:07:43.520779 31420 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:07:43.521619 master-0 kubenswrapper[31420]: I0220 12:07:43.521570 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:07:43.557183 master-0 kubenswrapper[31420]: W0220 12:07:43.557123 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2202ebf88dd4d5cadde1ad8cb2bbaddc.slice/crio-116b7dee3ad784e2feab4174a8198425a92afa794b117a2015daf60c4b5486ea WatchSource:0}: Error finding container 116b7dee3ad784e2feab4174a8198425a92afa794b117a2015daf60c4b5486ea: Status 404 returned error can't find the container with id 116b7dee3ad784e2feab4174a8198425a92afa794b117a2015daf60c4b5486ea Feb 20 12:07:44.522014 master-0 kubenswrapper[31420]: I0220 12:07:44.521953 31420 generic.go:334] "Generic (PLEG): container finished" podID="2202ebf88dd4d5cadde1ad8cb2bbaddc" containerID="3b333a43f77c364508fe518a28d9069a1a4b7d691ad0afae484ef7d69dd75773" exitCode=0 Feb 20 12:07:44.523078 master-0 kubenswrapper[31420]: I0220 12:07:44.522058 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"2202ebf88dd4d5cadde1ad8cb2bbaddc","Type":"ContainerDied","Data":"3b333a43f77c364508fe518a28d9069a1a4b7d691ad0afae484ef7d69dd75773"} Feb 20 12:07:44.523078 master-0 kubenswrapper[31420]: I0220 12:07:44.522754 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"2202ebf88dd4d5cadde1ad8cb2bbaddc","Type":"ContainerStarted","Data":"116b7dee3ad784e2feab4174a8198425a92afa794b117a2015daf60c4b5486ea"} Feb 20 12:07:44.523488 master-0 kubenswrapper[31420]: I0220 12:07:44.523426 
31420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810" Feb 20 12:07:44.523488 master-0 kubenswrapper[31420]: I0220 12:07:44.523466 31420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810" Feb 20 12:07:44.524668 master-0 kubenswrapper[31420]: I0220 12:07:44.524587 31420 status_manager.go:851] "Failed to get status for pod" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:44.524838 master-0 kubenswrapper[31420]: E0220 12:07:44.524593 31420 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:07:44.525601 master-0 kubenswrapper[31420]: I0220 12:07:44.525466 31420 status_manager.go:851] "Failed to get status for pod" podUID="65774ccd44b6b404cec890cd0cfa3872" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 12:07:45.537975 master-0 kubenswrapper[31420]: I0220 12:07:45.536744 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"2202ebf88dd4d5cadde1ad8cb2bbaddc","Type":"ContainerStarted","Data":"97fc8cde4bafc3a910384bbdf02defb58c46d6f77e981f2ca9485d0b0556f9d1"} Feb 20 12:07:45.537975 master-0 
kubenswrapper[31420]: I0220 12:07:45.536803 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"2202ebf88dd4d5cadde1ad8cb2bbaddc","Type":"ContainerStarted","Data":"31f5108a5aed50865d30527c45ab736ccd4400f0aa2396619a2711858c2e2431"} Feb 20 12:07:45.537975 master-0 kubenswrapper[31420]: I0220 12:07:45.536817 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"2202ebf88dd4d5cadde1ad8cb2bbaddc","Type":"ContainerStarted","Data":"4512fcd70b99cfaca6ccfcddf71cd5f43b2d401e85ce7a2ae7018c7bdc552a47"} Feb 20 12:07:46.547892 master-0 kubenswrapper[31420]: I0220 12:07:46.547843 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"2202ebf88dd4d5cadde1ad8cb2bbaddc","Type":"ContainerStarted","Data":"3098dfd7dc5a1c3f954b616c3eb4b76c8ab65f63dad41023390cf85044178c02"} Feb 20 12:07:46.547892 master-0 kubenswrapper[31420]: I0220 12:07:46.547900 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"2202ebf88dd4d5cadde1ad8cb2bbaddc","Type":"ContainerStarted","Data":"5a4bf14305f6440e9938fe7fcbff39d9e7c64c166ba60a2533030e809f0aed23"} Feb 20 12:07:46.548467 master-0 kubenswrapper[31420]: I0220 12:07:46.548093 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:07:46.548467 master-0 kubenswrapper[31420]: I0220 12:07:46.548220 31420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810" Feb 20 12:07:46.548467 master-0 kubenswrapper[31420]: I0220 12:07:46.548249 31420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810" Feb 20 12:07:48.522710 master-0 
kubenswrapper[31420]: I0220 12:07:48.522609 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:07:48.522710 master-0 kubenswrapper[31420]: I0220 12:07:48.522685 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:07:48.531574 master-0 kubenswrapper[31420]: I0220 12:07:48.531453 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:07:49.615128 master-0 kubenswrapper[31420]: I0220 12:07:49.614322 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:07:49.621653 master-0 kubenswrapper[31420]: I0220 12:07:49.621573 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:07:50.585375 master-0 kubenswrapper[31420]: I0220 12:07:50.585312 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:07:51.577399 master-0 kubenswrapper[31420]: I0220 12:07:51.577358 31420 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 12:07:52.600607 master-0 kubenswrapper[31420]: I0220 12:07:52.600506 31420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810" Feb 20 12:07:52.600607 master-0 kubenswrapper[31420]: I0220 12:07:52.600601 31420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810" Feb 20 12:07:52.607145 master-0 kubenswrapper[31420]: I0220 12:07:52.607112 31420 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:07:53.111355 master-0 kubenswrapper[31420]: I0220 12:07:53.111250 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:07:53.567565 master-0 kubenswrapper[31420]: I0220 12:07:53.566639 31420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="2202ebf88dd4d5cadde1ad8cb2bbaddc" podUID="9fbee3cb-42d9-4d07-a0ef-ff50225f9949"
Feb 20 12:07:53.607821 master-0 kubenswrapper[31420]: I0220 12:07:53.607745 31420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810"
Feb 20 12:07:53.607821 master-0 kubenswrapper[31420]: I0220 12:07:53.607788 31420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7c3803db-7f28-427b-a5c4-5e91943e7810"
Feb 20 12:07:54.530390 master-0 kubenswrapper[31420]: I0220 12:07:54.530137 31420 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 20 12:07:54.633641 master-0 kubenswrapper[31420]: I0220 12:07:54.633555 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 12:07:54.692925 master-0 kubenswrapper[31420]: I0220 12:07:54.692835 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 20 12:07:54.865321 master-0 kubenswrapper[31420]: I0220 12:07:54.865122 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 20 12:07:54.999955 master-0 kubenswrapper[31420]: I0220 12:07:54.999817 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 20 12:07:55.262363 master-0 kubenswrapper[31420]: I0220 12:07:55.262275 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 20 12:07:55.364019 master-0 kubenswrapper[31420]: I0220 12:07:55.363936 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 20 12:07:55.402059 master-0 kubenswrapper[31420]: I0220 12:07:55.401971 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 20 12:07:55.520080 master-0 kubenswrapper[31420]: I0220 12:07:55.519922 31420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="2202ebf88dd4d5cadde1ad8cb2bbaddc" podUID="9fbee3cb-42d9-4d07-a0ef-ff50225f9949"
Feb 20 12:07:55.740349 master-0 kubenswrapper[31420]: I0220 12:07:55.740280 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 20 12:07:55.890205 master-0 kubenswrapper[31420]: I0220 12:07:55.890010 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 20 12:07:56.457322 master-0 kubenswrapper[31420]: I0220 12:07:56.457232 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 20 12:07:56.517334 master-0 kubenswrapper[31420]: I0220 12:07:56.517109 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 20 12:07:56.595450 master-0 kubenswrapper[31420]: I0220 12:07:56.595393 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 20 12:07:56.607958 master-0 kubenswrapper[31420]: I0220 12:07:56.607650 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 20 12:07:56.927729 master-0 kubenswrapper[31420]: I0220 12:07:56.927628 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 20 12:07:56.961149 master-0 kubenswrapper[31420]: I0220 12:07:56.960401 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 20 12:07:56.989800 master-0 kubenswrapper[31420]: I0220 12:07:56.989667 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 20 12:07:57.005877 master-0 kubenswrapper[31420]: I0220 12:07:57.005782 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 20 12:07:57.299319 master-0 kubenswrapper[31420]: I0220 12:07:57.299227 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 20 12:07:57.736863 master-0 kubenswrapper[31420]: I0220 12:07:57.736723 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 12:07:57.750556 master-0 kubenswrapper[31420]: I0220 12:07:57.750458 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 20 12:07:58.008982 master-0 kubenswrapper[31420]: I0220 12:07:58.008923 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-jfq59"
Feb 20 12:07:58.036347 master-0 kubenswrapper[31420]: I0220 12:07:58.036265 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 20 12:07:58.041565 master-0 kubenswrapper[31420]: I0220 12:07:58.041495 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 20 12:07:58.061209 master-0 kubenswrapper[31420]: I0220 12:07:58.061111 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 20 12:07:58.104461 master-0 kubenswrapper[31420]: I0220 12:07:58.104402 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 20 12:07:58.148998 master-0 kubenswrapper[31420]: I0220 12:07:58.148896 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 12:07:58.172298 master-0 kubenswrapper[31420]: I0220 12:07:58.172220 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 20 12:07:58.220309 master-0 kubenswrapper[31420]: I0220 12:07:58.220259 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 20 12:07:58.267919 master-0 kubenswrapper[31420]: I0220 12:07:58.267809 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-69d2b"
Feb 20 12:07:58.275040 master-0 kubenswrapper[31420]: I0220 12:07:58.274978 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 12:07:58.324883 master-0 kubenswrapper[31420]: I0220 12:07:58.324805 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 20 12:07:58.329385 master-0 kubenswrapper[31420]: I0220 12:07:58.329332 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 20 12:07:58.371990 master-0 kubenswrapper[31420]: I0220 12:07:58.371908 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 12:07:58.400786 master-0 kubenswrapper[31420]: I0220 12:07:58.400717 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 20 12:07:58.477811 master-0 kubenswrapper[31420]: I0220 12:07:58.477740 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 20 12:07:58.503829 master-0 kubenswrapper[31420]: I0220 12:07:58.503751 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 20 12:07:58.533303 master-0 kubenswrapper[31420]: I0220 12:07:58.533163 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 20 12:07:58.542085 master-0 kubenswrapper[31420]: I0220 12:07:58.542037 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 20 12:07:58.597742 master-0 kubenswrapper[31420]: I0220 12:07:58.597647 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 20 12:07:58.648568 master-0 kubenswrapper[31420]: I0220 12:07:58.645999 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 20 12:07:58.734632 master-0 kubenswrapper[31420]: I0220 12:07:58.733871 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 20 12:07:58.735703 master-0 kubenswrapper[31420]: I0220 12:07:58.735635 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 12:07:58.771376 master-0 kubenswrapper[31420]: I0220 12:07:58.771296 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 20 12:07:58.799869 master-0 kubenswrapper[31420]: I0220 12:07:58.799741 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 20 12:07:58.907732 master-0 kubenswrapper[31420]: I0220 12:07:58.907620 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 20 12:07:58.948573 master-0 kubenswrapper[31420]: I0220 12:07:58.945865 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 20 12:07:58.964568 master-0 kubenswrapper[31420]: I0220 12:07:58.963776 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 20 12:07:59.002815 master-0 kubenswrapper[31420]: I0220 12:07:59.002614 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 20 12:07:59.025396 master-0 kubenswrapper[31420]: I0220 12:07:59.025294 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 12:07:59.047855 master-0 kubenswrapper[31420]: I0220 12:07:59.047794 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 20 12:07:59.060047 master-0 kubenswrapper[31420]: I0220 12:07:59.059959 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 20 12:07:59.081997 master-0 kubenswrapper[31420]: I0220 12:07:59.081907 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 20 12:07:59.101090 master-0 kubenswrapper[31420]: I0220 12:07:59.100994 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 20 12:07:59.104381 master-0 kubenswrapper[31420]: I0220 12:07:59.104321 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-r89nt"
Feb 20 12:07:59.207416 master-0 kubenswrapper[31420]: I0220 12:07:59.207343 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 20 12:07:59.214037 master-0 kubenswrapper[31420]: I0220 12:07:59.213987 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 20 12:07:59.243335 master-0 kubenswrapper[31420]: I0220 12:07:59.243251 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-fdbqz"
Feb 20 12:07:59.268478 master-0 kubenswrapper[31420]: I0220 12:07:59.268403 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-g5hcq"
Feb 20 12:07:59.302388 master-0 kubenswrapper[31420]: I0220 12:07:59.301979 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 20 12:07:59.322604 master-0 kubenswrapper[31420]: I0220 12:07:59.322445 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 20 12:07:59.322828 master-0 kubenswrapper[31420]: I0220 12:07:59.322675 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 20 12:07:59.452386 master-0 kubenswrapper[31420]: I0220 12:07:59.452307 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 20 12:07:59.494796 master-0 kubenswrapper[31420]: I0220 12:07:59.494684 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 20 12:07:59.495168 master-0 kubenswrapper[31420]: I0220 12:07:59.494720 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 20 12:07:59.520410 master-0 kubenswrapper[31420]: I0220 12:07:59.520341 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 20 12:07:59.534664 master-0 kubenswrapper[31420]: I0220 12:07:59.534502 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-llz42"
Feb 20 12:07:59.566784 master-0 kubenswrapper[31420]: I0220 12:07:59.566727 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 20 12:07:59.589070 master-0 kubenswrapper[31420]: I0220 12:07:59.588925 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 20 12:07:59.620205 master-0 kubenswrapper[31420]: I0220 12:07:59.620128 31420 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 20 12:07:59.628295 master-0 kubenswrapper[31420]: I0220 12:07:59.628239 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 20 12:07:59.631910 master-0 kubenswrapper[31420]: I0220 12:07:59.631845 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 20 12:07:59.677596 master-0 kubenswrapper[31420]: I0220 12:07:59.676280 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 20 12:07:59.704682 master-0 kubenswrapper[31420]: I0220 12:07:59.704490 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 20 12:07:59.705995 master-0 kubenswrapper[31420]: I0220 12:07:59.705327 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 20 12:07:59.751258 master-0 kubenswrapper[31420]: I0220 12:07:59.751173 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 20 12:07:59.784920 master-0 kubenswrapper[31420]: I0220 12:07:59.784827 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 20 12:07:59.826004 master-0 kubenswrapper[31420]: I0220 12:07:59.825924 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 20 12:07:59.846623 master-0 kubenswrapper[31420]: I0220 12:07:59.846451 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 20 12:07:59.846834 master-0 kubenswrapper[31420]: I0220 12:07:59.846463 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 20 12:07:59.860622 master-0 kubenswrapper[31420]: I0220 12:07:59.860365 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 20 12:07:59.903830 master-0 kubenswrapper[31420]: I0220 12:07:59.902449 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-8ksk5"
Feb 20 12:07:59.953361 master-0 kubenswrapper[31420]: I0220 12:07:59.953295 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-k7mnd"
Feb 20 12:07:59.960228 master-0 kubenswrapper[31420]: I0220 12:07:59.960156 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 20 12:07:59.964121 master-0 kubenswrapper[31420]: I0220 12:07:59.964082 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 20 12:07:59.969164 master-0 kubenswrapper[31420]: I0220 12:07:59.969115 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 20 12:07:59.974319 master-0 kubenswrapper[31420]: I0220 12:07:59.974293 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-lc26c"
Feb 20 12:07:59.976347 master-0 kubenswrapper[31420]: I0220 12:07:59.976286 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 20 12:07:59.980168 master-0 kubenswrapper[31420]: I0220 12:07:59.980100 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 20 12:07:59.980286 master-0 kubenswrapper[31420]: I0220 12:07:59.980120 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 20 12:07:59.982521 master-0 kubenswrapper[31420]: I0220 12:07:59.982496 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 20 12:08:00.016596 master-0 kubenswrapper[31420]: I0220 12:08:00.012841 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 20 12:08:00.026641 master-0 kubenswrapper[31420]: I0220 12:08:00.024940 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 20 12:08:00.061595 master-0 kubenswrapper[31420]: I0220 12:08:00.061492 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 20 12:08:00.095858 master-0 kubenswrapper[31420]: I0220 12:08:00.095820 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 20 12:08:00.095858 master-0 kubenswrapper[31420]: I0220 12:08:00.097550 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 20 12:08:00.165824 master-0 kubenswrapper[31420]: I0220 12:08:00.165690 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 20 12:08:00.194913 master-0 kubenswrapper[31420]: I0220 12:08:00.194847 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 20 12:08:00.224702 master-0 kubenswrapper[31420]: I0220 12:08:00.224651 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 20 12:08:00.285105 master-0 kubenswrapper[31420]: I0220 12:08:00.285016 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-27tf7"
Feb 20 12:08:00.329500 master-0 kubenswrapper[31420]: I0220 12:08:00.329409 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 20 12:08:00.332136 master-0 kubenswrapper[31420]: I0220 12:08:00.332081 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 20 12:08:00.347873 master-0 kubenswrapper[31420]: I0220 12:08:00.347156 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 20 12:08:00.352336 master-0 kubenswrapper[31420]: I0220 12:08:00.352260 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 20 12:08:00.379566 master-0 kubenswrapper[31420]: I0220 12:08:00.379512 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 20 12:08:00.406234 master-0 kubenswrapper[31420]: I0220 12:08:00.406163 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 20 12:08:00.423209 master-0 kubenswrapper[31420]: I0220 12:08:00.423124 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 20 12:08:00.423521 master-0 kubenswrapper[31420]: I0220 12:08:00.423501 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 20 12:08:00.435144 master-0 kubenswrapper[31420]: I0220 12:08:00.435068 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 20 12:08:00.442845 master-0 kubenswrapper[31420]: I0220 12:08:00.442793 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-l7xzb"
Feb 20 12:08:00.582639 master-0 kubenswrapper[31420]: I0220 12:08:00.582515 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 20 12:08:00.587188 master-0 kubenswrapper[31420]: I0220 12:08:00.586589 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 20 12:08:00.642793 master-0 kubenswrapper[31420]: I0220 12:08:00.642700 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 20 12:08:00.684135 master-0 kubenswrapper[31420]: I0220 12:08:00.683950 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 20 12:08:00.701033 master-0 kubenswrapper[31420]: I0220 12:08:00.700720 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 20 12:08:00.701811 master-0 kubenswrapper[31420]: I0220 12:08:00.701758 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 20 12:08:00.714804 master-0 kubenswrapper[31420]: I0220 12:08:00.714596 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 12:08:00.724731 master-0 kubenswrapper[31420]: I0220 12:08:00.724646 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 20 12:08:00.784474 master-0 kubenswrapper[31420]: I0220 12:08:00.784392 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 20 12:08:00.790357 master-0 kubenswrapper[31420]: I0220 12:08:00.790279 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 20 12:08:00.895991 master-0 kubenswrapper[31420]: I0220 12:08:00.895919 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 20 12:08:00.905521 master-0 kubenswrapper[31420]: I0220 12:08:00.905451 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 20 12:08:00.920790 master-0 kubenswrapper[31420]: I0220 12:08:00.920715 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 20 12:08:00.942946 master-0 kubenswrapper[31420]: I0220 12:08:00.942812 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 20 12:08:01.014251 master-0 kubenswrapper[31420]: I0220 12:08:01.014179 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 20 12:08:01.048436 master-0 kubenswrapper[31420]: I0220 12:08:01.048339 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 20 12:08:01.076518 master-0 kubenswrapper[31420]: I0220 12:08:01.076467 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 20 12:08:01.163922 master-0 kubenswrapper[31420]: I0220 12:08:01.163870 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 20 12:08:01.187245 master-0 kubenswrapper[31420]: I0220 12:08:01.187172 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 20 12:08:01.226085 master-0 kubenswrapper[31420]: I0220 12:08:01.225945 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-jmbqp"
Feb 20 12:08:01.260574 master-0 kubenswrapper[31420]: I0220 12:08:01.260485 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 20 12:08:01.270806 master-0 kubenswrapper[31420]: I0220 12:08:01.270748 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 20 12:08:01.378147 master-0 kubenswrapper[31420]: I0220 12:08:01.378082 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 12:08:01.397293 master-0 kubenswrapper[31420]: I0220 12:08:01.397247 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 20 12:08:01.433201 master-0 kubenswrapper[31420]: I0220 12:08:01.433125 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-92b9q"
Feb 20 12:08:01.468270 master-0 kubenswrapper[31420]: I0220 12:08:01.468212 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-mhfhg"
Feb 20 12:08:01.481028 master-0 kubenswrapper[31420]: I0220 12:08:01.480936 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 20 12:08:01.495974 master-0 kubenswrapper[31420]: I0220 12:08:01.495941 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 20 12:08:01.549819 master-0 kubenswrapper[31420]: I0220 12:08:01.549752 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 12:08:01.660828 master-0 kubenswrapper[31420]: I0220 12:08:01.660767 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 20 12:08:02.119964 master-0 kubenswrapper[31420]: I0220 12:08:02.119776 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 20 12:08:02.124718 master-0 kubenswrapper[31420]: I0220 12:08:02.121817 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 20 12:08:02.124718 master-0 kubenswrapper[31420]: I0220 12:08:02.122175 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.127637 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.127858 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.127940 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.128078 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.131002 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.131631 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.131824 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.131949 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.132466 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 20 12:08:02.133874 master-0 kubenswrapper[31420]: I0220 12:08:02.132724 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 20 12:08:02.143432 master-0 kubenswrapper[31420]: I0220 12:08:02.143356 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 20 12:08:02.143925 master-0 kubenswrapper[31420]: I0220 12:08:02.143862 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 20 12:08:02.154126 master-0 kubenswrapper[31420]: I0220 12:08:02.154071 31420 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 20 12:08:02.165331 master-0 kubenswrapper[31420]: I0220 12:08:02.165284 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 20 12:08:02.165416 master-0 kubenswrapper[31420]: I0220 12:08:02.165366 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 20 12:08:02.169764 master-0 kubenswrapper[31420]: I0220 12:08:02.169642 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 20 12:08:02.172251 master-0 kubenswrapper[31420]: I0220 12:08:02.172215 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 12:08:02.173409 master-0 kubenswrapper[31420]: I0220 12:08:02.173354 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 20 12:08:02.182964 master-0 kubenswrapper[31420]: I0220 12:08:02.182897 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 20 12:08:02.187623 master-0 kubenswrapper[31420]: I0220 12:08:02.185701 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 20 12:08:02.193212 master-0 kubenswrapper[31420]: I0220 12:08:02.193146 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=11.193129959 podStartE2EDuration="11.193129959s" podCreationTimestamp="2026-02-20 12:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:08:02.188376695 +0000 UTC m=+186.907614936" watchObservedRunningTime="2026-02-20 12:08:02.193129959 +0000 UTC m=+186.912368200"
Feb 20 12:08:02.222058 master-0 kubenswrapper[31420]: I0220 12:08:02.222002 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 20 12:08:02.279250 master-0 kubenswrapper[31420]: I0220 12:08:02.279191 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 20 12:08:02.286759 master-0 kubenswrapper[31420]: I0220 12:08:02.286735 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 20 12:08:02.312757 master-0 kubenswrapper[31420]: I0220 12:08:02.312704 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 20 12:08:02.319155 master-0 kubenswrapper[31420]: I0220 12:08:02.319119 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 20 12:08:02.330179 master-0 kubenswrapper[31420]: I0220 12:08:02.330138 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-zhr86"
Feb 20 12:08:02.337347 master-0 kubenswrapper[31420]: I0220 12:08:02.337256 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 20 12:08:02.384866 master-0 kubenswrapper[31420]: I0220 12:08:02.384754 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 20 12:08:02.414669 master-0 kubenswrapper[31420]: I0220 12:08:02.414594 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 20 12:08:02.472664 master-0 kubenswrapper[31420]: I0220 12:08:02.472580 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 20 12:08:02.499623 master-0 kubenswrapper[31420]: I0220 12:08:02.499566 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 20 12:08:02.503601 master-0 kubenswrapper[31420]: I0220 12:08:02.503573 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 20 12:08:02.612057 master-0 kubenswrapper[31420]: I0220 12:08:02.611967 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 20 12:08:02.621768 master-0 kubenswrapper[31420]: I0220 12:08:02.621718 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 20 12:08:02.642175 master-0 kubenswrapper[31420]: I0220 12:08:02.642051 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 20 12:08:02.645183 master-0 kubenswrapper[31420]: I0220 12:08:02.645117 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 20 12:08:02.645336 master-0 kubenswrapper[31420]: I0220 12:08:02.645126 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 20 12:08:02.646724 master-0 kubenswrapper[31420]: I0220 12:08:02.646682 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 20 12:08:02.686843 master-0 kubenswrapper[31420]: I0220 12:08:02.686671 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 20 12:08:02.687154 master-0 kubenswrapper[31420]: I0220 12:08:02.686719 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 20 12:08:02.725585 master-0 kubenswrapper[31420]: I0220 12:08:02.720955 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-vncjl"
Feb 20 12:08:02.738224 master-0 kubenswrapper[31420]: I0220 12:08:02.738159 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 20 12:08:02.744691 master-0 kubenswrapper[31420]: I0220 12:08:02.744637 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 20 12:08:02.753143 master-0 kubenswrapper[31420]: I0220 12:08:02.753088 31420 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 20 12:08:02.759020 master-0 kubenswrapper[31420]: I0220 12:08:02.758975 31420 kubelet.go:2431] "SyncLoop REMOVE"
source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 20 12:08:02.759237 master-0 kubenswrapper[31420]: I0220 12:08:02.759201 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="10f4041a226ca54ed300f2badc93fd43" containerName="startup-monitor" containerID="cri-o://2663b47d99f8b649affaa920b741ce96808073403384a9a64f7c0926a5d3fbf0" gracePeriod=5 Feb 20 12:08:02.770785 master-0 kubenswrapper[31420]: I0220 12:08:02.770742 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 20 12:08:02.771772 master-0 kubenswrapper[31420]: I0220 12:08:02.771745 31420 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 20 12:08:02.826605 master-0 kubenswrapper[31420]: I0220 12:08:02.826559 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 20 12:08:02.936316 master-0 kubenswrapper[31420]: I0220 12:08:02.936171 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 20 12:08:02.984360 master-0 kubenswrapper[31420]: I0220 12:08:02.984310 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 20 12:08:02.986633 master-0 kubenswrapper[31420]: I0220 12:08:02.986600 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 20 12:08:02.990236 master-0 kubenswrapper[31420]: I0220 12:08:02.990197 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 20 12:08:03.009411 master-0 kubenswrapper[31420]: I0220 12:08:03.007336 31420 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 20 12:08:03.009411 master-0 kubenswrapper[31420]: I0220 12:08:03.008441 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 20 12:08:03.025555 master-0 kubenswrapper[31420]: I0220 12:08:03.019979 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ffxph" Feb 20 12:08:03.029616 master-0 kubenswrapper[31420]: I0220 12:08:03.029550 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 20 12:08:03.047063 master-0 kubenswrapper[31420]: I0220 12:08:03.047005 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 20 12:08:03.132196 master-0 kubenswrapper[31420]: I0220 12:08:03.130235 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 20 12:08:03.187086 master-0 kubenswrapper[31420]: I0220 12:08:03.186965 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 20 12:08:03.382751 master-0 kubenswrapper[31420]: I0220 12:08:03.382630 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-jxm2z" Feb 20 12:08:03.404976 master-0 kubenswrapper[31420]: I0220 12:08:03.404911 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 20 12:08:03.454410 master-0 kubenswrapper[31420]: I0220 12:08:03.454256 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 20 12:08:03.483648 master-0 kubenswrapper[31420]: I0220 12:08:03.483586 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-q8nx7" Feb 20 12:08:03.502993 master-0 kubenswrapper[31420]: I0220 12:08:03.502941 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 20 12:08:03.529654 master-0 kubenswrapper[31420]: I0220 12:08:03.529588 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9z85g" Feb 20 12:08:03.557267 master-0 kubenswrapper[31420]: I0220 12:08:03.557189 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-zxcjx" Feb 20 12:08:03.581149 master-0 kubenswrapper[31420]: I0220 12:08:03.580787 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 20 12:08:03.581149 master-0 kubenswrapper[31420]: I0220 12:08:03.581038 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-g5hlk" Feb 20 12:08:03.589864 master-0 kubenswrapper[31420]: I0220 12:08:03.587882 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 20 12:08:03.589864 master-0 kubenswrapper[31420]: I0220 12:08:03.589128 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-st2x9" Feb 20 12:08:03.619680 master-0 kubenswrapper[31420]: I0220 12:08:03.619631 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 20 12:08:03.633872 master-0 kubenswrapper[31420]: I0220 12:08:03.633506 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-nd6lj" Feb 20 12:08:03.653101 master-0 
kubenswrapper[31420]: I0220 12:08:03.653042 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 20 12:08:03.700677 master-0 kubenswrapper[31420]: I0220 12:08:03.700453 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 20 12:08:03.732495 master-0 kubenswrapper[31420]: I0220 12:08:03.732093 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 20 12:08:03.770545 master-0 kubenswrapper[31420]: I0220 12:08:03.770441 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 20 12:08:03.820853 master-0 kubenswrapper[31420]: I0220 12:08:03.820772 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 20 12:08:03.839344 master-0 kubenswrapper[31420]: I0220 12:08:03.839283 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 20 12:08:03.861372 master-0 kubenswrapper[31420]: I0220 12:08:03.861287 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 20 12:08:03.862275 master-0 kubenswrapper[31420]: I0220 12:08:03.862223 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 20 12:08:03.874697 master-0 kubenswrapper[31420]: I0220 12:08:03.874640 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 20 12:08:03.918852 master-0 kubenswrapper[31420]: I0220 12:08:03.918774 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 20 12:08:03.926207 master-0 
kubenswrapper[31420]: I0220 12:08:03.926179 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 20 12:08:04.007834 master-0 kubenswrapper[31420]: I0220 12:08:04.007751 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-mr6l7" Feb 20 12:08:04.019578 master-0 kubenswrapper[31420]: I0220 12:08:04.019425 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-cqc0j177hn3k9" Feb 20 12:08:04.042830 master-0 kubenswrapper[31420]: I0220 12:08:04.042771 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 20 12:08:04.092180 master-0 kubenswrapper[31420]: I0220 12:08:04.091842 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 20 12:08:04.097189 master-0 kubenswrapper[31420]: I0220 12:08:04.097146 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 20 12:08:04.179469 master-0 kubenswrapper[31420]: I0220 12:08:04.179366 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 20 12:08:04.309070 master-0 kubenswrapper[31420]: I0220 12:08:04.308919 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 20 12:08:04.421398 master-0 kubenswrapper[31420]: I0220 12:08:04.421328 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 20 12:08:04.473522 master-0 kubenswrapper[31420]: I0220 12:08:04.473441 31420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-marketplace-dockercfg-lv5zr" Feb 20 12:08:04.496510 master-0 kubenswrapper[31420]: I0220 12:08:04.496432 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 20 12:08:04.554025 master-0 kubenswrapper[31420]: I0220 12:08:04.553960 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 20 12:08:04.569021 master-0 kubenswrapper[31420]: I0220 12:08:04.568904 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 20 12:08:04.793640 master-0 kubenswrapper[31420]: I0220 12:08:04.793511 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 20 12:08:04.912639 master-0 kubenswrapper[31420]: I0220 12:08:04.912462 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 20 12:08:05.136811 master-0 kubenswrapper[31420]: I0220 12:08:05.136733 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 20 12:08:05.299482 master-0 kubenswrapper[31420]: I0220 12:08:05.299366 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 20 12:08:05.602015 master-0 kubenswrapper[31420]: I0220 12:08:05.601814 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 20 12:08:05.634995 master-0 kubenswrapper[31420]: I0220 12:08:05.634907 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-l5hc4" Feb 20 12:08:05.907674 master-0 kubenswrapper[31420]: I0220 12:08:05.907497 31420 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 20 12:08:05.924768 master-0 kubenswrapper[31420]: I0220 12:08:05.924717 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 20 12:08:06.009654 master-0 kubenswrapper[31420]: I0220 12:08:06.009616 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-46trq" Feb 20 12:08:06.769377 master-0 kubenswrapper[31420]: I0220 12:08:06.769336 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 20 12:08:07.096299 master-0 kubenswrapper[31420]: I0220 12:08:07.096183 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-74ql7"] Feb 20 12:08:07.101382 master-0 kubenswrapper[31420]: E0220 12:08:07.101340 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f4041a226ca54ed300f2badc93fd43" containerName="startup-monitor" Feb 20 12:08:07.101382 master-0 kubenswrapper[31420]: I0220 12:08:07.101383 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f4041a226ca54ed300f2badc93fd43" containerName="startup-monitor" Feb 20 12:08:07.101611 master-0 kubenswrapper[31420]: E0220 12:08:07.101410 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" containerName="installer" Feb 20 12:08:07.101611 master-0 kubenswrapper[31420]: I0220 12:08:07.101419 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" containerName="installer" Feb 20 12:08:07.101696 master-0 kubenswrapper[31420]: I0220 12:08:07.101677 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f4041a226ca54ed300f2badc93fd43" containerName="startup-monitor" Feb 20 12:08:07.101749 master-0 kubenswrapper[31420]: I0220 12:08:07.101733 
31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6108c5-19ba-4f99-9f75-6e02fa5876f2" containerName="installer" Feb 20 12:08:07.102310 master-0 kubenswrapper[31420]: I0220 12:08:07.102283 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.104900 master-0 kubenswrapper[31420]: I0220 12:08:07.104850 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 20 12:08:07.105054 master-0 kubenswrapper[31420]: I0220 12:08:07.105020 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 20 12:08:07.105104 master-0 kubenswrapper[31420]: I0220 12:08:07.105031 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 20 12:08:07.106047 master-0 kubenswrapper[31420]: I0220 12:08:07.106024 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-zshxw" Feb 20 12:08:07.112220 master-0 kubenswrapper[31420]: I0220 12:08:07.112180 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 20 12:08:07.114209 master-0 kubenswrapper[31420]: I0220 12:08:07.114184 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 20 12:08:07.116615 master-0 kubenswrapper[31420]: I0220 12:08:07.116565 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-74ql7"] Feb 20 12:08:07.204298 master-0 kubenswrapper[31420]: I0220 12:08:07.204223 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j75f9\" (UniqueName: 
\"kubernetes.io/projected/298cd5fa-38c1-4bd3-a300-d82166658f50-kube-api-access-j75f9\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.204640 master-0 kubenswrapper[31420]: I0220 12:08:07.204615 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/298cd5fa-38c1-4bd3-a300-d82166658f50-trusted-ca\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.204764 master-0 kubenswrapper[31420]: I0220 12:08:07.204747 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/298cd5fa-38c1-4bd3-a300-d82166658f50-serving-cert\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.204877 master-0 kubenswrapper[31420]: I0220 12:08:07.204857 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/298cd5fa-38c1-4bd3-a300-d82166658f50-config\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.307124 master-0 kubenswrapper[31420]: I0220 12:08:07.306981 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j75f9\" (UniqueName: \"kubernetes.io/projected/298cd5fa-38c1-4bd3-a300-d82166658f50-kube-api-access-j75f9\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " 
pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.307403 master-0 kubenswrapper[31420]: I0220 12:08:07.307140 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/298cd5fa-38c1-4bd3-a300-d82166658f50-trusted-ca\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.307403 master-0 kubenswrapper[31420]: I0220 12:08:07.307198 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/298cd5fa-38c1-4bd3-a300-d82166658f50-serving-cert\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.307403 master-0 kubenswrapper[31420]: I0220 12:08:07.307249 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/298cd5fa-38c1-4bd3-a300-d82166658f50-config\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.309847 master-0 kubenswrapper[31420]: I0220 12:08:07.309789 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/298cd5fa-38c1-4bd3-a300-d82166658f50-config\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.310222 master-0 kubenswrapper[31420]: I0220 12:08:07.310168 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/298cd5fa-38c1-4bd3-a300-d82166658f50-trusted-ca\") pod 
\"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.313110 master-0 kubenswrapper[31420]: I0220 12:08:07.312970 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/298cd5fa-38c1-4bd3-a300-d82166658f50-serving-cert\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.342311 master-0 kubenswrapper[31420]: I0220 12:08:07.342239 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j75f9\" (UniqueName: \"kubernetes.io/projected/298cd5fa-38c1-4bd3-a300-d82166658f50-kube-api-access-j75f9\") pod \"console-operator-5df5ffc47c-74ql7\" (UID: \"298cd5fa-38c1-4bd3-a300-d82166658f50\") " pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.431684 master-0 kubenswrapper[31420]: I0220 12:08:07.431557 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" Feb 20 12:08:07.869766 master-0 kubenswrapper[31420]: I0220 12:08:07.868961 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-74ql7"] Feb 20 12:08:07.876310 master-0 kubenswrapper[31420]: W0220 12:08:07.876250 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod298cd5fa_38c1_4bd3_a300_d82166658f50.slice/crio-752903ed2e44b20d68e1c3cc6e6734b549a9c8601c7de9e18acca1f58fe4fff0 WatchSource:0}: Error finding container 752903ed2e44b20d68e1c3cc6e6734b549a9c8601c7de9e18acca1f58fe4fff0: Status 404 returned error can't find the container with id 752903ed2e44b20d68e1c3cc6e6734b549a9c8601c7de9e18acca1f58fe4fff0 Feb 20 12:08:08.179648 master-0 kubenswrapper[31420]: I0220 12:08:08.179459 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_10f4041a226ca54ed300f2badc93fd43/startup-monitor/0.log" Feb 20 12:08:08.179648 master-0 kubenswrapper[31420]: I0220 12:08:08.179571 31420 generic.go:334] "Generic (PLEG): container finished" podID="10f4041a226ca54ed300f2badc93fd43" containerID="2663b47d99f8b649affaa920b741ce96808073403384a9a64f7c0926a5d3fbf0" exitCode=137 Feb 20 12:08:08.180755 master-0 kubenswrapper[31420]: I0220 12:08:08.180686 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" event={"ID":"298cd5fa-38c1-4bd3-a300-d82166658f50","Type":"ContainerStarted","Data":"752903ed2e44b20d68e1c3cc6e6734b549a9c8601c7de9e18acca1f58fe4fff0"} Feb 20 12:08:08.302257 master-0 kubenswrapper[31420]: I0220 12:08:08.302201 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_10f4041a226ca54ed300f2badc93fd43/startup-monitor/0.log" Feb 20 
12:08:08.302398 master-0 kubenswrapper[31420]: I0220 12:08:08.302306 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 12:08:08.323114 master-0 kubenswrapper[31420]: I0220 12:08:08.323059 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 20 12:08:08.423347 master-0 kubenswrapper[31420]: I0220 12:08:08.423266 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-resource-dir\") pod \"10f4041a226ca54ed300f2badc93fd43\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " Feb 20 12:08:08.423563 master-0 kubenswrapper[31420]: I0220 12:08:08.423456 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "10f4041a226ca54ed300f2badc93fd43" (UID: "10f4041a226ca54ed300f2badc93fd43"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:08:08.423677 master-0 kubenswrapper[31420]: I0220 12:08:08.423644 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-lock\") pod \"10f4041a226ca54ed300f2badc93fd43\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " Feb 20 12:08:08.423727 master-0 kubenswrapper[31420]: I0220 12:08:08.423686 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-lock" (OuterVolumeSpecName: "var-lock") pod "10f4041a226ca54ed300f2badc93fd43" (UID: "10f4041a226ca54ed300f2badc93fd43"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:08:08.423767 master-0 kubenswrapper[31420]: I0220 12:08:08.423742 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-log\") pod \"10f4041a226ca54ed300f2badc93fd43\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " Feb 20 12:08:08.423848 master-0 kubenswrapper[31420]: I0220 12:08:08.423818 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-manifests\") pod \"10f4041a226ca54ed300f2badc93fd43\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " Feb 20 12:08:08.423893 master-0 kubenswrapper[31420]: I0220 12:08:08.423823 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-log" (OuterVolumeSpecName: "var-log") pod "10f4041a226ca54ed300f2badc93fd43" (UID: "10f4041a226ca54ed300f2badc93fd43"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:08:08.423931 master-0 kubenswrapper[31420]: I0220 12:08:08.423885 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-pod-resource-dir\") pod \"10f4041a226ca54ed300f2badc93fd43\" (UID: \"10f4041a226ca54ed300f2badc93fd43\") " Feb 20 12:08:08.424309 master-0 kubenswrapper[31420]: I0220 12:08:08.423845 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-manifests" (OuterVolumeSpecName: "manifests") pod "10f4041a226ca54ed300f2badc93fd43" (UID: "10f4041a226ca54ed300f2badc93fd43"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:08:08.424428 master-0 kubenswrapper[31420]: I0220 12:08:08.424392 31420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:08.424472 master-0 kubenswrapper[31420]: I0220 12:08:08.424431 31420 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-var-log\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:08.424472 master-0 kubenswrapper[31420]: I0220 12:08:08.424458 31420 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-manifests\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:08.424550 master-0 kubenswrapper[31420]: I0220 12:08:08.424484 31420 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:08.428926 master-0 kubenswrapper[31420]: I0220 12:08:08.428844 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "10f4041a226ca54ed300f2badc93fd43" (UID: "10f4041a226ca54ed300f2badc93fd43"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:08:08.526652 master-0 kubenswrapper[31420]: I0220 12:08:08.526577 31420 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/10f4041a226ca54ed300f2badc93fd43-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:09.202396 master-0 kubenswrapper[31420]: I0220 12:08:09.202327 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_10f4041a226ca54ed300f2badc93fd43/startup-monitor/0.log"
Feb 20 12:08:09.202885 master-0 kubenswrapper[31420]: I0220 12:08:09.202463 31420 scope.go:117] "RemoveContainer" containerID="2663b47d99f8b649affaa920b741ce96808073403384a9a64f7c0926a5d3fbf0"
Feb 20 12:08:09.202885 master-0 kubenswrapper[31420]: I0220 12:08:09.202585 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 12:08:09.505381 master-0 kubenswrapper[31420]: I0220 12:08:09.505309 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f4041a226ca54ed300f2badc93fd43" path="/var/lib/kubelet/pods/10f4041a226ca54ed300f2badc93fd43/volumes"
Feb 20 12:08:09.783457 master-0 kubenswrapper[31420]: I0220 12:08:09.783333 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 20 12:08:10.865626 master-0 kubenswrapper[31420]: I0220 12:08:10.865562 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-rjk9v"
Feb 20 12:08:11.220243 master-0 kubenswrapper[31420]: I0220 12:08:11.220171 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" event={"ID":"298cd5fa-38c1-4bd3-a300-d82166658f50","Type":"ContainerStarted","Data":"71f3b240a0977eddc4b6fb8fe8cbe9f9921a6db37c8200966a68bb5de3aac6c6"}
Feb 20 12:08:11.220717 master-0 kubenswrapper[31420]: I0220 12:08:11.220623 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7"
Feb 20 12:08:11.245950 master-0 kubenswrapper[31420]: I0220 12:08:11.245813 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7" podStartSLOduration=1.435041234 podStartE2EDuration="4.245789099s" podCreationTimestamp="2026-02-20 12:08:07 +0000 UTC" firstStartedPulling="2026-02-20 12:08:07.87882447 +0000 UTC m=+192.598062731" lastFinishedPulling="2026-02-20 12:08:10.689572355 +0000 UTC m=+195.408810596" observedRunningTime="2026-02-20 12:08:11.242614049 +0000 UTC m=+195.961852300" watchObservedRunningTime="2026-02-20 12:08:11.245789099 +0000 UTC m=+195.965027380"
Feb 20 12:08:11.273755 master-0 kubenswrapper[31420]: I0220 12:08:11.273694 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-j7fmn"
Feb 20 12:08:11.577621 master-0 kubenswrapper[31420]: I0220 12:08:11.577514 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5df5ffc47c-74ql7"
Feb 20 12:08:11.833349 master-0 kubenswrapper[31420]: I0220 12:08:11.833213 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-955b69498-tnjkt"]
Feb 20 12:08:11.834485 master-0 kubenswrapper[31420]: I0220 12:08:11.834295 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-tnjkt"
Feb 20 12:08:11.837960 master-0 kubenswrapper[31420]: I0220 12:08:11.837840 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 20 12:08:11.838416 master-0 kubenswrapper[31420]: I0220 12:08:11.838374 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-rqjns"
Feb 20 12:08:11.838610 master-0 kubenswrapper[31420]: I0220 12:08:11.838541 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 20 12:08:11.859706 master-0 kubenswrapper[31420]: I0220 12:08:11.858626 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-tnjkt"]
Feb 20 12:08:11.983702 master-0 kubenswrapper[31420]: I0220 12:08:11.983640 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2q5s\" (UniqueName: \"kubernetes.io/projected/2497e863-ea03-4513-8d7a-3b5fef6f323a-kube-api-access-g2q5s\") pod \"downloads-955b69498-tnjkt\" (UID: \"2497e863-ea03-4513-8d7a-3b5fef6f323a\") " pod="openshift-console/downloads-955b69498-tnjkt"
Feb 20 12:08:12.085853 master-0 kubenswrapper[31420]: I0220 12:08:12.085696 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2q5s\" (UniqueName: \"kubernetes.io/projected/2497e863-ea03-4513-8d7a-3b5fef6f323a-kube-api-access-g2q5s\") pod \"downloads-955b69498-tnjkt\" (UID: \"2497e863-ea03-4513-8d7a-3b5fef6f323a\") " pod="openshift-console/downloads-955b69498-tnjkt"
Feb 20 12:08:12.105059 master-0 kubenswrapper[31420]: I0220 12:08:12.104961 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2q5s\" (UniqueName: \"kubernetes.io/projected/2497e863-ea03-4513-8d7a-3b5fef6f323a-kube-api-access-g2q5s\") pod \"downloads-955b69498-tnjkt\" (UID: \"2497e863-ea03-4513-8d7a-3b5fef6f323a\") " pod="openshift-console/downloads-955b69498-tnjkt"
Feb 20 12:08:12.155901 master-0 kubenswrapper[31420]: I0220 12:08:12.155794 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-tnjkt"
Feb 20 12:08:12.663595 master-0 kubenswrapper[31420]: I0220 12:08:12.662988 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-tnjkt"]
Feb 20 12:08:12.668028 master-0 kubenswrapper[31420]: W0220 12:08:12.667980 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2497e863_ea03_4513_8d7a_3b5fef6f323a.slice/crio-ee8b61c0b9bee25b18b79f19c989a87d0526de8e42e4a609b5a4282821b49226 WatchSource:0}: Error finding container ee8b61c0b9bee25b18b79f19c989a87d0526de8e42e4a609b5a4282821b49226: Status 404 returned error can't find the container with id ee8b61c0b9bee25b18b79f19c989a87d0526de8e42e4a609b5a4282821b49226
Feb 20 12:08:12.813428 master-0 kubenswrapper[31420]: I0220 12:08:12.813324 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 20 12:08:12.912415 master-0 kubenswrapper[31420]: I0220 12:08:12.912201 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 20 12:08:13.150997 master-0 kubenswrapper[31420]: I0220 12:08:13.150730 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 20 12:08:13.162587 master-0 kubenswrapper[31420]: I0220 12:08:13.162499 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 20 12:08:13.243017 master-0 kubenswrapper[31420]: I0220 12:08:13.242711 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-tnjkt" event={"ID":"2497e863-ea03-4513-8d7a-3b5fef6f323a","Type":"ContainerStarted","Data":"ee8b61c0b9bee25b18b79f19c989a87d0526de8e42e4a609b5a4282821b49226"}
Feb 20 12:08:13.683211 master-0 kubenswrapper[31420]: I0220 12:08:13.683117 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 20 12:08:13.705289 master-0 kubenswrapper[31420]: I0220 12:08:13.705190 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 20 12:08:13.734085 master-0 kubenswrapper[31420]: I0220 12:08:13.734016 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 20 12:08:13.863771 master-0 kubenswrapper[31420]: I0220 12:08:13.861406 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 20 12:08:13.928869 master-0 kubenswrapper[31420]: I0220 12:08:13.928810 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 20 12:08:14.131951 master-0 kubenswrapper[31420]: I0220 12:08:14.131861 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 20 12:08:14.234433 master-0 kubenswrapper[31420]: I0220 12:08:14.233306 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 20 12:08:14.283639 master-0 kubenswrapper[31420]: I0220 12:08:14.283569 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-fgsdc"
Feb 20 12:08:14.308257 master-0 kubenswrapper[31420]: I0220 12:08:14.308147 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 20 12:08:14.475505 master-0 kubenswrapper[31420]: I0220 12:08:14.475114 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f4d46dfcc-9m7bm"]
Feb 20 12:08:14.477595 master-0 kubenswrapper[31420]: I0220 12:08:14.477517 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.479406 master-0 kubenswrapper[31420]: I0220 12:08:14.479288 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-2qn6j"
Feb 20 12:08:14.483109 master-0 kubenswrapper[31420]: I0220 12:08:14.482779 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 20 12:08:14.483109 master-0 kubenswrapper[31420]: I0220 12:08:14.482934 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 20 12:08:14.483109 master-0 kubenswrapper[31420]: I0220 12:08:14.482965 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 20 12:08:14.483109 master-0 kubenswrapper[31420]: I0220 12:08:14.482974 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 20 12:08:14.483340 master-0 kubenswrapper[31420]: I0220 12:08:14.483152 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 20 12:08:14.493812 master-0 kubenswrapper[31420]: I0220 12:08:14.492659 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f4d46dfcc-9m7bm"]
Feb 20 12:08:14.627047 master-0 kubenswrapper[31420]: I0220 12:08:14.626969 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c2zd\" (UniqueName: \"kubernetes.io/projected/b16b42ee-106a-4667-82dc-463c002e7437-kube-api-access-4c2zd\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.627398 master-0 kubenswrapper[31420]: I0220 12:08:14.627335 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-service-ca\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.627468 master-0 kubenswrapper[31420]: I0220 12:08:14.627434 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-oauth-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.627538 master-0 kubenswrapper[31420]: I0220 12:08:14.627485 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-console-config\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.627590 master-0 kubenswrapper[31420]: I0220 12:08:14.627520 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.627689 master-0 kubenswrapper[31420]: I0220 12:08:14.627636 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-oauth-config\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.662950 master-0 kubenswrapper[31420]: I0220 12:08:14.662864 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 20 12:08:14.729392 master-0 kubenswrapper[31420]: I0220 12:08:14.729256 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c2zd\" (UniqueName: \"kubernetes.io/projected/b16b42ee-106a-4667-82dc-463c002e7437-kube-api-access-4c2zd\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.729392 master-0 kubenswrapper[31420]: I0220 12:08:14.729319 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-service-ca\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.729392 master-0 kubenswrapper[31420]: I0220 12:08:14.729352 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-oauth-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.729392 master-0 kubenswrapper[31420]: I0220 12:08:14.729391 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-console-config\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.729712 master-0 kubenswrapper[31420]: I0220 12:08:14.729424 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.729712 master-0 kubenswrapper[31420]: I0220 12:08:14.729473 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-oauth-config\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.730417 master-0 kubenswrapper[31420]: E0220 12:08:14.729923 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:14.730417 master-0 kubenswrapper[31420]: E0220 12:08:14.729990 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert podName:b16b42ee-106a-4667-82dc-463c002e7437 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:15.229971284 +0000 UTC m=+199.949209525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert") pod "console-7f4d46dfcc-9m7bm" (UID: "b16b42ee-106a-4667-82dc-463c002e7437") : secret "console-serving-cert" not found
Feb 20 12:08:14.730543 master-0 kubenswrapper[31420]: I0220 12:08:14.730425 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-service-ca\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.730955 master-0 kubenswrapper[31420]: I0220 12:08:14.730892 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-oauth-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.730955 master-0 kubenswrapper[31420]: I0220 12:08:14.730918 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-console-config\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.733039 master-0 kubenswrapper[31420]: I0220 12:08:14.733005 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-oauth-config\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.756300 master-0 kubenswrapper[31420]: I0220 12:08:14.756227 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c2zd\" (UniqueName: \"kubernetes.io/projected/b16b42ee-106a-4667-82dc-463c002e7437-kube-api-access-4c2zd\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:14.869997 master-0 kubenswrapper[31420]: I0220 12:08:14.869945 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 20 12:08:15.237565 master-0 kubenswrapper[31420]: I0220 12:08:15.236658 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 20 12:08:15.241565 master-0 kubenswrapper[31420]: I0220 12:08:15.238645 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:15.241565 master-0 kubenswrapper[31420]: E0220 12:08:15.238774 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:15.241565 master-0 kubenswrapper[31420]: E0220 12:08:15.238818 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert podName:b16b42ee-106a-4667-82dc-463c002e7437 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:16.238802111 +0000 UTC m=+200.958040342 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert") pod "console-7f4d46dfcc-9m7bm" (UID: "b16b42ee-106a-4667-82dc-463c002e7437") : secret "console-serving-cert" not found
Feb 20 12:08:15.422889 master-0 kubenswrapper[31420]: I0220 12:08:15.422827 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 20 12:08:16.114689 master-0 kubenswrapper[31420]: I0220 12:08:16.114612 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 20 12:08:16.136940 master-0 kubenswrapper[31420]: I0220 12:08:16.136869 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 20 12:08:16.253986 master-0 kubenswrapper[31420]: I0220 12:08:16.253931 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:16.254621 master-0 kubenswrapper[31420]: E0220 12:08:16.254079 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:16.254621 master-0 kubenswrapper[31420]: E0220 12:08:16.254162 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert podName:b16b42ee-106a-4667-82dc-463c002e7437 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:18.254140799 +0000 UTC m=+202.973379040 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert") pod "console-7f4d46dfcc-9m7bm" (UID: "b16b42ee-106a-4667-82dc-463c002e7437") : secret "console-serving-cert" not found
Feb 20 12:08:16.526922 master-0 kubenswrapper[31420]: I0220 12:08:16.526852 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 12:08:17.214149 master-0 kubenswrapper[31420]: I0220 12:08:17.214073 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 20 12:08:17.460401 master-0 kubenswrapper[31420]: I0220 12:08:17.460354 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xsh5v"
Feb 20 12:08:17.796330 master-0 kubenswrapper[31420]: I0220 12:08:17.796253 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 20 12:08:18.210949 master-0 kubenswrapper[31420]: I0220 12:08:18.210811 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 20 12:08:18.287340 master-0 kubenswrapper[31420]: I0220 12:08:18.287249 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm"
Feb 20 12:08:18.287696 master-0 kubenswrapper[31420]: E0220 12:08:18.287613 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:18.287795 master-0 kubenswrapper[31420]: E0220 12:08:18.287742 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert podName:b16b42ee-106a-4667-82dc-463c002e7437 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:22.287712196 +0000 UTC m=+207.006950477 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert") pod "console-7f4d46dfcc-9m7bm" (UID: "b16b42ee-106a-4667-82dc-463c002e7437") : secret "console-serving-cert" not found
Feb 20 12:08:18.362546 master-0 kubenswrapper[31420]: I0220 12:08:18.362394 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 20 12:08:18.520570 master-0 kubenswrapper[31420]: I0220 12:08:18.520478 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 20 12:08:19.057363 master-0 kubenswrapper[31420]: I0220 12:08:19.057273 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 20 12:08:19.225485 master-0 kubenswrapper[31420]: I0220 12:08:19.225306 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 20 12:08:19.383750 master-0 kubenswrapper[31420]: I0220 12:08:19.383584 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 20 12:08:19.403912 master-0 kubenswrapper[31420]: I0220 12:08:19.403814 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 20 12:08:19.405684 master-0 kubenswrapper[31420]: I0220 12:08:19.405620 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 20 12:08:20.502264 master-0 kubenswrapper[31420]: I0220 12:08:20.502226 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 20 12:08:21.072379 master-0 kubenswrapper[31420]: I0220 12:08:21.072313 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7db6b45755-6rz2s"]
Feb 20 12:08:21.073892 master-0 kubenswrapper[31420]: I0220 12:08:21.073848 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.080047 master-0 kubenswrapper[31420]: I0220 12:08:21.079349 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7db6b45755-6rz2s"]
Feb 20 12:08:21.087860 master-0 kubenswrapper[31420]: I0220 12:08:21.087792 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 20 12:08:21.236168 master-0 kubenswrapper[31420]: I0220 12:08:21.236085 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-config\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.236406 master-0 kubenswrapper[31420]: I0220 12:08:21.236278 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-oauth-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.236406 master-0 kubenswrapper[31420]: I0220 12:08:21.236344 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-oauth-config\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.236580 master-0 kubenswrapper[31420]: I0220 12:08:21.236495 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-trusted-ca-bundle\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.236654 master-0 kubenswrapper[31420]: I0220 12:08:21.236621 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmpj9\" (UniqueName: \"kubernetes.io/projected/8076e8ff-cac6-4008-b719-b92fb734d4f9-kube-api-access-vmpj9\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.236737 master-0 kubenswrapper[31420]: I0220 12:08:21.236710 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.236801 master-0 kubenswrapper[31420]: I0220 12:08:21.236755 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-service-ca\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.338347 master-0 kubenswrapper[31420]: I0220 12:08:21.338171 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-config\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.338637 master-0 kubenswrapper[31420]: I0220 12:08:21.338471 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-oauth-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.338637 master-0 kubenswrapper[31420]: I0220 12:08:21.338599 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-oauth-config\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.338773 master-0 kubenswrapper[31420]: I0220 12:08:21.338651 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-trusted-ca-bundle\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.338773 master-0 kubenswrapper[31420]: I0220 12:08:21.338686 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmpj9\" (UniqueName: \"kubernetes.io/projected/8076e8ff-cac6-4008-b719-b92fb734d4f9-kube-api-access-vmpj9\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.338977 master-0 kubenswrapper[31420]: I0220 12:08:21.338914 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.339077 master-0 kubenswrapper[31420]: I0220 12:08:21.339003 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-service-ca\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.339640 master-0 kubenswrapper[31420]: I0220 12:08:21.339553 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-oauth-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.339740 master-0 kubenswrapper[31420]: E0220 12:08:21.339683 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:21.340140 master-0 kubenswrapper[31420]: I0220 12:08:21.340077 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-config\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.340346 master-0 kubenswrapper[31420]: E0220 12:08:21.340296 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert podName:8076e8ff-cac6-4008-b719-b92fb734d4f9 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:21.840262183 +0000 UTC m=+206.559500454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert") pod "console-7db6b45755-6rz2s" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9") : secret "console-serving-cert" not found
Feb 20 12:08:21.340513 master-0 kubenswrapper[31420]: I0220 12:08:21.340454 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-trusted-ca-bundle\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.340630 master-0 kubenswrapper[31420]: I0220 12:08:21.340584 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-service-ca\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.344032 master-0 kubenswrapper[31420]: I0220 12:08:21.343940 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-oauth-config\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.359704 master-0 kubenswrapper[31420]: I0220 12:08:21.358772 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmpj9\" (UniqueName: \"kubernetes.io/projected/8076e8ff-cac6-4008-b719-b92fb734d4f9-kube-api-access-vmpj9\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.845896 master-0 kubenswrapper[31420]: I0220 12:08:21.845803 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:21.848222 master-0 kubenswrapper[31420]: E0220 12:08:21.845967 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:21.848222 master-0 kubenswrapper[31420]: E0220 12:08:21.846052 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert podName:8076e8ff-cac6-4008-b719-b92fb734d4f9 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:22.846028073 +0000 UTC m=+207.565266304 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert") pod "console-7db6b45755-6rz2s" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9") : secret "console-serving-cert" not found Feb 20 12:08:22.353608 master-0 kubenswrapper[31420]: I0220 12:08:22.353495 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm" Feb 20 12:08:22.354013 master-0 kubenswrapper[31420]: E0220 12:08:22.353791 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:22.354013 master-0 kubenswrapper[31420]: E0220 12:08:22.353950 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert podName:b16b42ee-106a-4667-82dc-463c002e7437 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:30.353909603 +0000 UTC m=+215.073147904 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert") pod "console-7f4d46dfcc-9m7bm" (UID: "b16b42ee-106a-4667-82dc-463c002e7437") : secret "console-serving-cert" not found Feb 20 12:08:22.861351 master-0 kubenswrapper[31420]: I0220 12:08:22.861267 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s" Feb 20 12:08:22.861853 master-0 kubenswrapper[31420]: E0220 12:08:22.861420 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:22.861853 master-0 kubenswrapper[31420]: E0220 12:08:22.861500 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert podName:8076e8ff-cac6-4008-b719-b92fb734d4f9 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:24.861483764 +0000 UTC m=+209.580722005 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert") pod "console-7db6b45755-6rz2s" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9") : secret "console-serving-cert" not found Feb 20 12:08:24.897145 master-0 kubenswrapper[31420]: I0220 12:08:24.896993 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s" Feb 20 12:08:24.897979 master-0 kubenswrapper[31420]: E0220 12:08:24.897221 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:24.897979 master-0 kubenswrapper[31420]: E0220 12:08:24.897327 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert podName:8076e8ff-cac6-4008-b719-b92fb734d4f9 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:28.897306194 +0000 UTC m=+213.616544435 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert") pod "console-7db6b45755-6rz2s" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9") : secret "console-serving-cert" not found Feb 20 12:08:28.971823 master-0 kubenswrapper[31420]: I0220 12:08:28.971759 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s" Feb 20 12:08:28.973906 master-0 kubenswrapper[31420]: E0220 12:08:28.972130 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:28.973906 master-0 kubenswrapper[31420]: E0220 12:08:28.972296 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert podName:8076e8ff-cac6-4008-b719-b92fb734d4f9 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:36.972266737 +0000 UTC m=+221.691504978 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert") pod "console-7db6b45755-6rz2s" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9") : secret "console-serving-cert" not found Feb 20 12:08:29.411072 master-0 kubenswrapper[31420]: I0220 12:08:29.411006 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7f4d46dfcc-9m7bm"] Feb 20 12:08:29.411704 master-0 kubenswrapper[31420]: E0220 12:08:29.411659 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[console-serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-console/console-7f4d46dfcc-9m7bm" podUID="b16b42ee-106a-4667-82dc-463c002e7437" Feb 20 12:08:29.443915 master-0 kubenswrapper[31420]: I0220 12:08:29.443825 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f5fc458cf-6qd8t"] Feb 20 12:08:29.444942 master-0 kubenswrapper[31420]: I0220 12:08:29.444906 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.461110 master-0 kubenswrapper[31420]: I0220 12:08:29.461051 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f5fc458cf-6qd8t"] Feb 20 12:08:29.482048 master-0 kubenswrapper[31420]: I0220 12:08:29.481964 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-oauth-config\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.482048 master-0 kubenswrapper[31420]: I0220 12:08:29.482039 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-trusted-ca-bundle\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.482390 master-0 kubenswrapper[31420]: I0220 12:08:29.482069 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-oauth-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.482390 master-0 kubenswrapper[31420]: I0220 12:08:29.482105 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.482390 
master-0 kubenswrapper[31420]: I0220 12:08:29.482175 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-console-config\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.482390 master-0 kubenswrapper[31420]: I0220 12:08:29.482218 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-service-ca\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.482390 master-0 kubenswrapper[31420]: I0220 12:08:29.482244 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjcxx\" (UniqueName: \"kubernetes.io/projected/97a4b974-ceda-481d-8e07-f6e94e37095c-kube-api-access-vjcxx\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.583785 master-0 kubenswrapper[31420]: I0220 12:08:29.583689 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-oauth-config\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.584027 master-0 kubenswrapper[31420]: I0220 12:08:29.583805 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-trusted-ca-bundle\") pod \"console-6f5fc458cf-6qd8t\" (UID: 
\"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.584027 master-0 kubenswrapper[31420]: I0220 12:08:29.583836 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-oauth-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.584343 master-0 kubenswrapper[31420]: I0220 12:08:29.583911 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.584616 master-0 kubenswrapper[31420]: E0220 12:08:29.584482 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:29.584882 master-0 kubenswrapper[31420]: I0220 12:08:29.584814 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-console-config\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.584996 master-0 kubenswrapper[31420]: I0220 12:08:29.584969 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-oauth-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.585285 master-0 kubenswrapper[31420]: I0220 12:08:29.585209 
31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-service-ca\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.585424 master-0 kubenswrapper[31420]: I0220 12:08:29.585385 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjcxx\" (UniqueName: \"kubernetes.io/projected/97a4b974-ceda-481d-8e07-f6e94e37095c-kube-api-access-vjcxx\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.585642 master-0 kubenswrapper[31420]: E0220 12:08:29.585503 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert podName:97a4b974-ceda-481d-8e07-f6e94e37095c nodeName:}" failed. No retries permitted until 2026-02-20 12:08:30.085414587 +0000 UTC m=+214.804652868 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert") pod "console-6f5fc458cf-6qd8t" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c") : secret "console-serving-cert" not found Feb 20 12:08:29.585776 master-0 kubenswrapper[31420]: I0220 12:08:29.585755 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-service-ca\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.586016 master-0 kubenswrapper[31420]: I0220 12:08:29.585962 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-console-config\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.586145 master-0 kubenswrapper[31420]: I0220 12:08:29.586112 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-trusted-ca-bundle\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.600870 master-0 kubenswrapper[31420]: I0220 12:08:29.600814 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-oauth-config\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:29.603837 master-0 kubenswrapper[31420]: I0220 12:08:29.603804 31420 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vjcxx\" (UniqueName: \"kubernetes.io/projected/97a4b974-ceda-481d-8e07-f6e94e37095c-kube-api-access-vjcxx\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:30.096912 master-0 kubenswrapper[31420]: I0220 12:08:30.096817 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:30.097963 master-0 kubenswrapper[31420]: E0220 12:08:30.097039 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:30.097963 master-0 kubenswrapper[31420]: E0220 12:08:30.097142 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert podName:97a4b974-ceda-481d-8e07-f6e94e37095c nodeName:}" failed. No retries permitted until 2026-02-20 12:08:31.097120055 +0000 UTC m=+215.816358286 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert") pod "console-6f5fc458cf-6qd8t" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c") : secret "console-serving-cert" not found Feb 20 12:08:30.397754 master-0 kubenswrapper[31420]: I0220 12:08:30.397592 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f4d46dfcc-9m7bm" Feb 20 12:08:30.400842 master-0 kubenswrapper[31420]: I0220 12:08:30.400772 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert\") pod \"console-7f4d46dfcc-9m7bm\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " pod="openshift-console/console-7f4d46dfcc-9m7bm" Feb 20 12:08:30.401220 master-0 kubenswrapper[31420]: E0220 12:08:30.401177 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:30.401399 master-0 kubenswrapper[31420]: E0220 12:08:30.401262 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert podName:b16b42ee-106a-4667-82dc-463c002e7437 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:46.401239136 +0000 UTC m=+231.120477427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert") pod "console-7f4d46dfcc-9m7bm" (UID: "b16b42ee-106a-4667-82dc-463c002e7437") : secret "console-serving-cert" not found Feb 20 12:08:30.414448 master-0 kubenswrapper[31420]: I0220 12:08:30.414377 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f4d46dfcc-9m7bm" Feb 20 12:08:30.502181 master-0 kubenswrapper[31420]: I0220 12:08:30.502047 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c2zd\" (UniqueName: \"kubernetes.io/projected/b16b42ee-106a-4667-82dc-463c002e7437-kube-api-access-4c2zd\") pod \"b16b42ee-106a-4667-82dc-463c002e7437\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " Feb 20 12:08:30.502181 master-0 kubenswrapper[31420]: I0220 12:08:30.502123 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-oauth-serving-cert\") pod \"b16b42ee-106a-4667-82dc-463c002e7437\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " Feb 20 12:08:30.502792 master-0 kubenswrapper[31420]: I0220 12:08:30.502323 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-console-config\") pod \"b16b42ee-106a-4667-82dc-463c002e7437\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " Feb 20 12:08:30.502792 master-0 kubenswrapper[31420]: I0220 12:08:30.502377 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-oauth-config\") pod \"b16b42ee-106a-4667-82dc-463c002e7437\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " Feb 20 12:08:30.502792 master-0 kubenswrapper[31420]: I0220 12:08:30.502434 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-service-ca\") pod \"b16b42ee-106a-4667-82dc-463c002e7437\" (UID: \"b16b42ee-106a-4667-82dc-463c002e7437\") " Feb 20 12:08:30.503557 master-0 kubenswrapper[31420]: I0220 
12:08:30.503466 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-console-config" (OuterVolumeSpecName: "console-config") pod "b16b42ee-106a-4667-82dc-463c002e7437" (UID: "b16b42ee-106a-4667-82dc-463c002e7437"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:08:30.503716 master-0 kubenswrapper[31420]: I0220 12:08:30.503513 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b16b42ee-106a-4667-82dc-463c002e7437" (UID: "b16b42ee-106a-4667-82dc-463c002e7437"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:08:30.503716 master-0 kubenswrapper[31420]: I0220 12:08:30.503651 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-service-ca" (OuterVolumeSpecName: "service-ca") pod "b16b42ee-106a-4667-82dc-463c002e7437" (UID: "b16b42ee-106a-4667-82dc-463c002e7437"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:08:30.507277 master-0 kubenswrapper[31420]: I0220 12:08:30.507199 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b16b42ee-106a-4667-82dc-463c002e7437" (UID: "b16b42ee-106a-4667-82dc-463c002e7437"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:08:30.508089 master-0 kubenswrapper[31420]: I0220 12:08:30.507506 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16b42ee-106a-4667-82dc-463c002e7437-kube-api-access-4c2zd" (OuterVolumeSpecName: "kube-api-access-4c2zd") pod "b16b42ee-106a-4667-82dc-463c002e7437" (UID: "b16b42ee-106a-4667-82dc-463c002e7437"). InnerVolumeSpecName "kube-api-access-4c2zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:08:30.604653 master-0 kubenswrapper[31420]: I0220 12:08:30.604410 31420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-console-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:08:30.604653 master-0 kubenswrapper[31420]: I0220 12:08:30.604444 31420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:08:30.604653 master-0 kubenswrapper[31420]: I0220 12:08:30.604456 31420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 12:08:30.604653 master-0 kubenswrapper[31420]: I0220 12:08:30.604465 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c2zd\" (UniqueName: \"kubernetes.io/projected/b16b42ee-106a-4667-82dc-463c002e7437-kube-api-access-4c2zd\") on node \"master-0\" DevicePath \"\"" Feb 20 12:08:30.604653 master-0 kubenswrapper[31420]: I0220 12:08:30.604474 31420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b16b42ee-106a-4667-82dc-463c002e7437-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 
12:08:31.114208 master-0 kubenswrapper[31420]: I0220 12:08:31.113048 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:31.114208 master-0 kubenswrapper[31420]: E0220 12:08:31.113265 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:31.114208 master-0 kubenswrapper[31420]: E0220 12:08:31.113390 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert podName:97a4b974-ceda-481d-8e07-f6e94e37095c nodeName:}" failed. No retries permitted until 2026-02-20 12:08:33.113360878 +0000 UTC m=+217.832599159 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert") pod "console-6f5fc458cf-6qd8t" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c") : secret "console-serving-cert" not found Feb 20 12:08:31.411558 master-0 kubenswrapper[31420]: I0220 12:08:31.411321 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f4d46dfcc-9m7bm" Feb 20 12:08:31.515842 master-0 kubenswrapper[31420]: I0220 12:08:31.515764 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7f4d46dfcc-9m7bm"] Feb 20 12:08:31.516183 master-0 kubenswrapper[31420]: I0220 12:08:31.515857 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7f4d46dfcc-9m7bm"] Feb 20 12:08:31.522107 master-0 kubenswrapper[31420]: I0220 12:08:31.522049 31420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b16b42ee-106a-4667-82dc-463c002e7437-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 12:08:33.157713 master-0 kubenswrapper[31420]: I0220 12:08:33.157574 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:33.157713 master-0 kubenswrapper[31420]: E0220 12:08:33.157743 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:33.160848 master-0 kubenswrapper[31420]: E0220 12:08:33.157834 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert podName:97a4b974-ceda-481d-8e07-f6e94e37095c nodeName:}" failed. No retries permitted until 2026-02-20 12:08:37.157814893 +0000 UTC m=+221.877053134 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert") pod "console-6f5fc458cf-6qd8t" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c") : secret "console-serving-cert" not found Feb 20 12:08:33.511036 master-0 kubenswrapper[31420]: I0220 12:08:33.510981 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16b42ee-106a-4667-82dc-463c002e7437" path="/var/lib/kubelet/pods/b16b42ee-106a-4667-82dc-463c002e7437/volumes" Feb 20 12:08:37.029126 master-0 kubenswrapper[31420]: I0220 12:08:37.028638 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert\") pod \"console-7db6b45755-6rz2s\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " pod="openshift-console/console-7db6b45755-6rz2s" Feb 20 12:08:37.029126 master-0 kubenswrapper[31420]: E0220 12:08:37.028942 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:37.029126 master-0 kubenswrapper[31420]: E0220 12:08:37.029040 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert podName:8076e8ff-cac6-4008-b719-b92fb734d4f9 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:53.029021699 +0000 UTC m=+237.748259940 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert") pod "console-7db6b45755-6rz2s" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9") : secret "console-serving-cert" not found Feb 20 12:08:37.237235 master-0 kubenswrapper[31420]: I0220 12:08:37.237131 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:37.237434 master-0 kubenswrapper[31420]: E0220 12:08:37.237261 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:37.237434 master-0 kubenswrapper[31420]: E0220 12:08:37.237366 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert podName:97a4b974-ceda-481d-8e07-f6e94e37095c nodeName:}" failed. No retries permitted until 2026-02-20 12:08:45.237343596 +0000 UTC m=+229.956581927 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert") pod "console-6f5fc458cf-6qd8t" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c") : secret "console-serving-cert" not found Feb 20 12:08:45.268978 master-0 kubenswrapper[31420]: I0220 12:08:45.268913 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert\") pod \"console-6f5fc458cf-6qd8t\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") " pod="openshift-console/console-6f5fc458cf-6qd8t" Feb 20 12:08:45.269649 master-0 kubenswrapper[31420]: E0220 12:08:45.269115 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:45.269649 master-0 kubenswrapper[31420]: E0220 12:08:45.269210 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert podName:97a4b974-ceda-481d-8e07-f6e94e37095c nodeName:}" failed. No retries permitted until 2026-02-20 12:09:01.269188983 +0000 UTC m=+245.988427214 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert") pod "console-6f5fc458cf-6qd8t" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c") : secret "console-serving-cert" not found Feb 20 12:08:45.551835 master-0 kubenswrapper[31420]: I0220 12:08:45.551712 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-dtq2m"] Feb 20 12:08:45.552812 master-0 kubenswrapper[31420]: I0220 12:08:45.552779 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" Feb 20 12:08:45.554512 master-0 kubenswrapper[31420]: I0220 12:08:45.554480 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 20 12:08:45.555115 master-0 kubenswrapper[31420]: I0220 12:08:45.555069 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 20 12:08:45.575611 master-0 kubenswrapper[31420]: I0220 12:08:45.575515 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" Feb 20 12:08:45.575791 master-0 kubenswrapper[31420]: I0220 12:08:45.575684 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e76022bd-2c71-4e40-9f65-d07f3ba095f1-nginx-conf\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" Feb 20 12:08:45.714930 master-0 kubenswrapper[31420]: I0220 12:08:45.714879 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e76022bd-2c71-4e40-9f65-d07f3ba095f1-nginx-conf\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" Feb 20 12:08:45.715132 master-0 kubenswrapper[31420]: I0220 12:08:45.715000 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" Feb 20 12:08:45.715255 master-0 kubenswrapper[31420]: E0220 12:08:45.715202 31420 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 20 12:08:45.715319 master-0 kubenswrapper[31420]: E0220 12:08:45.715302 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert podName:e76022bd-2c71-4e40-9f65-d07f3ba095f1 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:46.21528603 +0000 UTC m=+230.934524271 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-dtq2m" (UID: "e76022bd-2c71-4e40-9f65-d07f3ba095f1") : secret "networking-console-plugin-cert" not found Feb 20 12:08:45.716172 master-0 kubenswrapper[31420]: I0220 12:08:45.716109 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e76022bd-2c71-4e40-9f65-d07f3ba095f1-nginx-conf\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" Feb 20 12:08:46.084427 master-0 kubenswrapper[31420]: I0220 12:08:46.084368 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-dtq2m"] Feb 20 12:08:46.222243 master-0 kubenswrapper[31420]: I0220 12:08:46.222194 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" Feb 20 12:08:46.222507 master-0 kubenswrapper[31420]: E0220 12:08:46.222365 31420 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 20 12:08:46.222507 master-0 kubenswrapper[31420]: E0220 12:08:46.222416 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert podName:e76022bd-2c71-4e40-9f65-d07f3ba095f1 nodeName:}" failed. 
No retries permitted until 2026-02-20 12:08:47.222402498 +0000 UTC m=+231.941640739 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-dtq2m" (UID: "e76022bd-2c71-4e40-9f65-d07f3ba095f1") : secret "networking-console-plugin-cert" not found Feb 20 12:08:47.241695 master-0 kubenswrapper[31420]: I0220 12:08:47.241609 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" Feb 20 12:08:47.242511 master-0 kubenswrapper[31420]: E0220 12:08:47.241807 31420 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 20 12:08:47.242511 master-0 kubenswrapper[31420]: E0220 12:08:47.241898 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert podName:e76022bd-2c71-4e40-9f65-d07f3ba095f1 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:49.241878423 +0000 UTC m=+233.961116664 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-dtq2m" (UID: "e76022bd-2c71-4e40-9f65-d07f3ba095f1") : secret "networking-console-plugin-cert" not found Feb 20 12:08:48.688476 master-0 kubenswrapper[31420]: I0220 12:08:48.688392 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7db6b45755-6rz2s"] Feb 20 12:08:48.689359 master-0 kubenswrapper[31420]: E0220 12:08:48.688992 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[console-serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-console/console-7db6b45755-6rz2s" podUID="8076e8ff-cac6-4008-b719-b92fb734d4f9" Feb 20 12:08:48.747472 master-0 kubenswrapper[31420]: I0220 12:08:48.747408 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6d9c46fd68-spxt2"] Feb 20 12:08:48.748293 master-0 kubenswrapper[31420]: I0220 12:08:48.748267 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.768773 master-0 kubenswrapper[31420]: I0220 12:08:48.768725 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d9c46fd68-spxt2"] Feb 20 12:08:48.872260 master-0 kubenswrapper[31420]: I0220 12:08:48.872196 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-service-ca\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.872678 master-0 kubenswrapper[31420]: I0220 12:08:48.872626 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-oauth-config\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.872751 master-0 kubenswrapper[31420]: I0220 12:08:48.872684 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-console-config\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.872814 master-0 kubenswrapper[31420]: I0220 12:08:48.872784 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-trusted-ca-bundle\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.872869 master-0 
kubenswrapper[31420]: I0220 12:08:48.872850 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.873156 master-0 kubenswrapper[31420]: I0220 12:08:48.873102 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnlk9\" (UniqueName: \"kubernetes.io/projected/8a7f358a-4a42-4323-ba9f-888aec86247a-kube-api-access-nnlk9\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.873250 master-0 kubenswrapper[31420]: I0220 12:08:48.873161 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-oauth-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.974813 master-0 kubenswrapper[31420]: I0220 12:08:48.974759 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-service-ca\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.974910 master-0 kubenswrapper[31420]: I0220 12:08:48.974841 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-console-config\") pod \"console-6d9c46fd68-spxt2\" (UID: 
\"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.974910 master-0 kubenswrapper[31420]: I0220 12:08:48.974868 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-oauth-config\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.974981 master-0 kubenswrapper[31420]: I0220 12:08:48.974913 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-trusted-ca-bundle\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.974981 master-0 kubenswrapper[31420]: I0220 12:08:48.974956 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.975046 master-0 kubenswrapper[31420]: I0220 12:08:48.975019 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnlk9\" (UniqueName: \"kubernetes.io/projected/8a7f358a-4a42-4323-ba9f-888aec86247a-kube-api-access-nnlk9\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.975079 master-0 kubenswrapper[31420]: I0220 12:08:48.975047 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-oauth-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.976142 master-0 kubenswrapper[31420]: I0220 12:08:48.976110 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-oauth-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.976686 master-0 kubenswrapper[31420]: E0220 12:08:48.976661 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:48.976761 master-0 kubenswrapper[31420]: E0220 12:08:48.976719 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert podName:8a7f358a-4a42-4323-ba9f-888aec86247a nodeName:}" failed. No retries permitted until 2026-02-20 12:08:49.476704681 +0000 UTC m=+234.195942922 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert") pod "console-6d9c46fd68-spxt2" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a") : secret "console-serving-cert" not found Feb 20 12:08:48.976998 master-0 kubenswrapper[31420]: I0220 12:08:48.976967 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-trusted-ca-bundle\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.977269 master-0 kubenswrapper[31420]: I0220 12:08:48.977229 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-service-ca\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.977325 master-0 kubenswrapper[31420]: I0220 12:08:48.977284 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-console-config\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.981537 master-0 kubenswrapper[31420]: I0220 12:08:48.981487 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-oauth-config\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:48.997730 master-0 kubenswrapper[31420]: I0220 12:08:48.997683 31420 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nnlk9\" (UniqueName: \"kubernetes.io/projected/8a7f358a-4a42-4323-ba9f-888aec86247a-kube-api-access-nnlk9\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:49.281602 master-0 kubenswrapper[31420]: I0220 12:08:49.281473 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" Feb 20 12:08:49.281867 master-0 kubenswrapper[31420]: E0220 12:08:49.281800 31420 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 20 12:08:49.282005 master-0 kubenswrapper[31420]: E0220 12:08:49.281965 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert podName:e76022bd-2c71-4e40-9f65-d07f3ba095f1 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:53.281928903 +0000 UTC m=+238.001167234 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-dtq2m" (UID: "e76022bd-2c71-4e40-9f65-d07f3ba095f1") : secret "networking-console-plugin-cert" not found Feb 20 12:08:49.485775 master-0 kubenswrapper[31420]: I0220 12:08:49.485675 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:08:49.486023 master-0 kubenswrapper[31420]: E0220 12:08:49.485872 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Feb 20 12:08:49.486023 master-0 kubenswrapper[31420]: E0220 12:08:49.485986 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert podName:8a7f358a-4a42-4323-ba9f-888aec86247a nodeName:}" failed. No retries permitted until 2026-02-20 12:08:50.485958579 +0000 UTC m=+235.205196850 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert") pod "console-6d9c46fd68-spxt2" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a") : secret "console-serving-cert" not found Feb 20 12:08:49.560141 master-0 kubenswrapper[31420]: I0220 12:08:49.560013 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7db6b45755-6rz2s" Feb 20 12:08:49.560613 master-0 kubenswrapper[31420]: I0220 12:08:49.560008 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-tnjkt" event={"ID":"2497e863-ea03-4513-8d7a-3b5fef6f323a","Type":"ContainerStarted","Data":"a28244e9e998af2e75d6feb43df2ae96eec683fc6412f096b5f2dc0b5aa9fd0c"} Feb 20 12:08:49.560899 master-0 kubenswrapper[31420]: I0220 12:08:49.560872 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-955b69498-tnjkt" Feb 20 12:08:49.562657 master-0 kubenswrapper[31420]: I0220 12:08:49.562614 31420 patch_prober.go:28] interesting pod/downloads-955b69498-tnjkt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" start-of-body= Feb 20 12:08:49.562750 master-0 kubenswrapper[31420]: I0220 12:08:49.562681 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-tnjkt" podUID="2497e863-ea03-4513-8d7a-3b5fef6f323a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" Feb 20 12:08:49.573672 master-0 kubenswrapper[31420]: I0220 12:08:49.573629 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7db6b45755-6rz2s" Feb 20 12:08:49.603507 master-0 kubenswrapper[31420]: I0220 12:08:49.603322 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-955b69498-tnjkt" podStartSLOduration=2.425375906 podStartE2EDuration="38.603234078s" podCreationTimestamp="2026-02-20 12:08:11 +0000 UTC" firstStartedPulling="2026-02-20 12:08:12.671376842 +0000 UTC m=+197.390615093" lastFinishedPulling="2026-02-20 12:08:48.849235024 +0000 UTC m=+233.568473265" observedRunningTime="2026-02-20 12:08:49.598280518 +0000 UTC m=+234.317518799" watchObservedRunningTime="2026-02-20 12:08:49.603234078 +0000 UTC m=+234.322472389" Feb 20 12:08:49.688932 master-0 kubenswrapper[31420]: I0220 12:08:49.688884 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmpj9\" (UniqueName: \"kubernetes.io/projected/8076e8ff-cac6-4008-b719-b92fb734d4f9-kube-api-access-vmpj9\") pod \"8076e8ff-cac6-4008-b719-b92fb734d4f9\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " Feb 20 12:08:49.688932 master-0 kubenswrapper[31420]: I0220 12:08:49.688939 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-trusted-ca-bundle\") pod \"8076e8ff-cac6-4008-b719-b92fb734d4f9\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " Feb 20 12:08:49.689788 master-0 kubenswrapper[31420]: I0220 12:08:49.688968 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-config\") pod \"8076e8ff-cac6-4008-b719-b92fb734d4f9\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " Feb 20 12:08:49.689788 master-0 kubenswrapper[31420]: I0220 12:08:49.689028 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-oauth-serving-cert\") pod \"8076e8ff-cac6-4008-b719-b92fb734d4f9\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " Feb 20 12:08:49.689788 master-0 kubenswrapper[31420]: I0220 12:08:49.689270 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-service-ca\") pod \"8076e8ff-cac6-4008-b719-b92fb734d4f9\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " Feb 20 12:08:49.689788 master-0 kubenswrapper[31420]: I0220 12:08:49.689333 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-oauth-config\") pod \"8076e8ff-cac6-4008-b719-b92fb734d4f9\" (UID: \"8076e8ff-cac6-4008-b719-b92fb734d4f9\") " Feb 20 12:08:49.690602 master-0 kubenswrapper[31420]: I0220 12:08:49.690558 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8076e8ff-cac6-4008-b719-b92fb734d4f9" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:08:49.690838 master-0 kubenswrapper[31420]: I0220 12:08:49.690814 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8076e8ff-cac6-4008-b719-b92fb734d4f9" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:08:49.691052 master-0 kubenswrapper[31420]: I0220 12:08:49.691028 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-config" (OuterVolumeSpecName: "console-config") pod "8076e8ff-cac6-4008-b719-b92fb734d4f9" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:08:49.691238 master-0 kubenswrapper[31420]: I0220 12:08:49.691216 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-service-ca" (OuterVolumeSpecName: "service-ca") pod "8076e8ff-cac6-4008-b719-b92fb734d4f9" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:08:49.694511 master-0 kubenswrapper[31420]: I0220 12:08:49.694476 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8076e8ff-cac6-4008-b719-b92fb734d4f9" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:08:49.694627 master-0 kubenswrapper[31420]: I0220 12:08:49.694558 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8076e8ff-cac6-4008-b719-b92fb734d4f9-kube-api-access-vmpj9" (OuterVolumeSpecName: "kube-api-access-vmpj9") pod "8076e8ff-cac6-4008-b719-b92fb734d4f9" (UID: "8076e8ff-cac6-4008-b719-b92fb734d4f9"). InnerVolumeSpecName "kube-api-access-vmpj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:08:49.790441 master-0 kubenswrapper[31420]: I0220 12:08:49.790375 31420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:49.790441 master-0 kubenswrapper[31420]: I0220 12:08:49.790413 31420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:49.790441 master-0 kubenswrapper[31420]: I0220 12:08:49.790425 31420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:49.790441 master-0 kubenswrapper[31420]: I0220 12:08:49.790434 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmpj9\" (UniqueName: \"kubernetes.io/projected/8076e8ff-cac6-4008-b719-b92fb734d4f9-kube-api-access-vmpj9\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:49.790441 master-0 kubenswrapper[31420]: I0220 12:08:49.790443 31420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:49.790441 master-0 kubenswrapper[31420]: I0220 12:08:49.790454 31420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:49.957772 master-0 kubenswrapper[31420]: I0220 12:08:49.957643 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f5fc458cf-6qd8t"]
Feb 20 12:08:49.959994 master-0 kubenswrapper[31420]: E0220 12:08:49.958799 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[console-serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-console/console-6f5fc458cf-6qd8t" podUID="97a4b974-ceda-481d-8e07-f6e94e37095c"
Feb 20 12:08:49.994367 master-0 kubenswrapper[31420]: I0220 12:08:49.994303 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f6444fbcc-rvd49"]
Feb 20 12:08:49.995348 master-0 kubenswrapper[31420]: I0220 12:08:49.995322 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.013777 master-0 kubenswrapper[31420]: I0220 12:08:50.013637 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f6444fbcc-rvd49"]
Feb 20 12:08:50.095553 master-0 kubenswrapper[31420]: I0220 12:08:50.095488 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-trusted-ca-bundle\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.095553 master-0 kubenswrapper[31420]: I0220 12:08:50.095550 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.095832 master-0 kubenswrapper[31420]: I0220 12:08:50.095606 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-oauth-config\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.095832 master-0 kubenswrapper[31420]: I0220 12:08:50.095771 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twvx8\" (UniqueName: \"kubernetes.io/projected/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-kube-api-access-twvx8\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.095949 master-0 kubenswrapper[31420]: I0220 12:08:50.095843 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-config\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.095949 master-0 kubenswrapper[31420]: I0220 12:08:50.095904 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-service-ca\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.096110 master-0 kubenswrapper[31420]: I0220 12:08:50.096069 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-oauth-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.198162 master-0 kubenswrapper[31420]: I0220 12:08:50.198076 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-oauth-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.198511 master-0 kubenswrapper[31420]: I0220 12:08:50.198265 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-trusted-ca-bundle\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.199688 master-0 kubenswrapper[31420]: I0220 12:08:50.199630 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-oauth-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.199801 master-0 kubenswrapper[31420]: E0220 12:08:50.199774 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:50.199872 master-0 kubenswrapper[31420]: E0220 12:08:50.199857 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert podName:79e3c3e3-1405-4ac9-b024-b6f2d25347b4 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:50.699831901 +0000 UTC m=+235.419070182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert") pod "console-7f6444fbcc-rvd49" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4") : secret "console-serving-cert" not found
Feb 20 12:08:50.200227 master-0 kubenswrapper[31420]: I0220 12:08:50.200173 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-trusted-ca-bundle\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.200310 master-0 kubenswrapper[31420]: I0220 12:08:50.198584 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.200420 master-0 kubenswrapper[31420]: I0220 12:08:50.200378 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-oauth-config\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.201174 master-0 kubenswrapper[31420]: I0220 12:08:50.201122 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twvx8\" (UniqueName: \"kubernetes.io/projected/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-kube-api-access-twvx8\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.201264 master-0 kubenswrapper[31420]: I0220 12:08:50.201176 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-config\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.201264 master-0 kubenswrapper[31420]: I0220 12:08:50.201230 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-service-ca\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.203159 master-0 kubenswrapper[31420]: I0220 12:08:50.203109 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-service-ca\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.204636 master-0 kubenswrapper[31420]: I0220 12:08:50.204572 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-config\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.205384 master-0 kubenswrapper[31420]: I0220 12:08:50.205329 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-oauth-config\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.234120 master-0 kubenswrapper[31420]: I0220 12:08:50.233979 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twvx8\" (UniqueName: \"kubernetes.io/projected/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-kube-api-access-twvx8\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.506757 master-0 kubenswrapper[31420]: I0220 12:08:50.506664 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2"
Feb 20 12:08:50.507150 master-0 kubenswrapper[31420]: E0220 12:08:50.507082 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:50.507229 master-0 kubenswrapper[31420]: E0220 12:08:50.507153 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert podName:8a7f358a-4a42-4323-ba9f-888aec86247a nodeName:}" failed. No retries permitted until 2026-02-20 12:08:52.507131292 +0000 UTC m=+237.226369563 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert") pod "console-6d9c46fd68-spxt2" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a") : secret "console-serving-cert" not found
Feb 20 12:08:50.568890 master-0 kubenswrapper[31420]: I0220 12:08:50.568781 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7db6b45755-6rz2s"
Feb 20 12:08:50.569191 master-0 kubenswrapper[31420]: I0220 12:08:50.569035 31420 patch_prober.go:28] interesting pod/downloads-955b69498-tnjkt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" start-of-body=
Feb 20 12:08:50.569191 master-0 kubenswrapper[31420]: I0220 12:08:50.569111 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-tnjkt" podUID="2497e863-ea03-4513-8d7a-3b5fef6f323a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused"
Feb 20 12:08:50.569447 master-0 kubenswrapper[31420]: I0220 12:08:50.569167 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f5fc458cf-6qd8t"
Feb 20 12:08:50.582040 master-0 kubenswrapper[31420]: I0220 12:08:50.581908 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f5fc458cf-6qd8t"
Feb 20 12:08:50.641248 master-0 kubenswrapper[31420]: I0220 12:08:50.641163 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7db6b45755-6rz2s"]
Feb 20 12:08:50.650863 master-0 kubenswrapper[31420]: I0220 12:08:50.650795 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7db6b45755-6rz2s"]
Feb 20 12:08:50.711372 master-0 kubenswrapper[31420]: I0220 12:08:50.711297 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjcxx\" (UniqueName: \"kubernetes.io/projected/97a4b974-ceda-481d-8e07-f6e94e37095c-kube-api-access-vjcxx\") pod \"97a4b974-ceda-481d-8e07-f6e94e37095c\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") "
Feb 20 12:08:50.711945 master-0 kubenswrapper[31420]: I0220 12:08:50.711480 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-oauth-serving-cert\") pod \"97a4b974-ceda-481d-8e07-f6e94e37095c\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") "
Feb 20 12:08:50.711945 master-0 kubenswrapper[31420]: I0220 12:08:50.711619 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-oauth-config\") pod \"97a4b974-ceda-481d-8e07-f6e94e37095c\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") "
Feb 20 12:08:50.711945 master-0 kubenswrapper[31420]: I0220 12:08:50.711832 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-service-ca\") pod \"97a4b974-ceda-481d-8e07-f6e94e37095c\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") "
Feb 20 12:08:50.711945 master-0 kubenswrapper[31420]: I0220 12:08:50.711912 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-console-config\") pod \"97a4b974-ceda-481d-8e07-f6e94e37095c\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") "
Feb 20 12:08:50.712134 master-0 kubenswrapper[31420]: I0220 12:08:50.711997 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-trusted-ca-bundle\") pod \"97a4b974-ceda-481d-8e07-f6e94e37095c\" (UID: \"97a4b974-ceda-481d-8e07-f6e94e37095c\") "
Feb 20 12:08:50.713178 master-0 kubenswrapper[31420]: I0220 12:08:50.712759 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:50.713178 master-0 kubenswrapper[31420]: I0220 12:08:50.712886 31420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8076e8ff-cac6-4008-b719-b92fb734d4f9-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:50.713178 master-0 kubenswrapper[31420]: E0220 12:08:50.713011 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:50.713178 master-0 kubenswrapper[31420]: E0220 12:08:50.713090 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert podName:79e3c3e3-1405-4ac9-b024-b6f2d25347b4 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:51.713064522 +0000 UTC m=+236.432302793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert") pod "console-7f6444fbcc-rvd49" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4") : secret "console-serving-cert" not found
Feb 20 12:08:50.715772 master-0 kubenswrapper[31420]: I0220 12:08:50.715642 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-service-ca" (OuterVolumeSpecName: "service-ca") pod "97a4b974-ceda-481d-8e07-f6e94e37095c" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:08:50.716407 master-0 kubenswrapper[31420]: I0220 12:08:50.716335 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "97a4b974-ceda-481d-8e07-f6e94e37095c" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:08:50.716477 master-0 kubenswrapper[31420]: I0220 12:08:50.716355 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-console-config" (OuterVolumeSpecName: "console-config") pod "97a4b974-ceda-481d-8e07-f6e94e37095c" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:08:50.717074 master-0 kubenswrapper[31420]: I0220 12:08:50.716976 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "97a4b974-ceda-481d-8e07-f6e94e37095c" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:08:50.717459 master-0 kubenswrapper[31420]: I0220 12:08:50.717410 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a4b974-ceda-481d-8e07-f6e94e37095c-kube-api-access-vjcxx" (OuterVolumeSpecName: "kube-api-access-vjcxx") pod "97a4b974-ceda-481d-8e07-f6e94e37095c" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c"). InnerVolumeSpecName "kube-api-access-vjcxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:08:50.719371 master-0 kubenswrapper[31420]: I0220 12:08:50.719284 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "97a4b974-ceda-481d-8e07-f6e94e37095c" (UID: "97a4b974-ceda-481d-8e07-f6e94e37095c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:08:50.814600 master-0 kubenswrapper[31420]: I0220 12:08:50.814434 31420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:50.814600 master-0 kubenswrapper[31420]: I0220 12:08:50.814504 31420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:50.814600 master-0 kubenswrapper[31420]: I0220 12:08:50.814548 31420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:50.814600 master-0 kubenswrapper[31420]: I0220 12:08:50.814563 31420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-console-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:50.814600 master-0 kubenswrapper[31420]: I0220 12:08:50.814577 31420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97a4b974-ceda-481d-8e07-f6e94e37095c-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:50.814600 master-0 kubenswrapper[31420]: I0220 12:08:50.814590 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjcxx\" (UniqueName: \"kubernetes.io/projected/97a4b974-ceda-481d-8e07-f6e94e37095c-kube-api-access-vjcxx\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:51.505349 master-0 kubenswrapper[31420]: I0220 12:08:51.505265 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8076e8ff-cac6-4008-b719-b92fb734d4f9" path="/var/lib/kubelet/pods/8076e8ff-cac6-4008-b719-b92fb734d4f9/volumes"
Feb 20 12:08:51.575289 master-0 kubenswrapper[31420]: I0220 12:08:51.575232 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f5fc458cf-6qd8t"
Feb 20 12:08:51.631953 master-0 kubenswrapper[31420]: I0220 12:08:51.631859 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f5fc458cf-6qd8t"]
Feb 20 12:08:51.646638 master-0 kubenswrapper[31420]: I0220 12:08:51.646478 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6f5fc458cf-6qd8t"]
Feb 20 12:08:51.729368 master-0 kubenswrapper[31420]: I0220 12:08:51.729286 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:51.730214 master-0 kubenswrapper[31420]: E0220 12:08:51.729512 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:51.730214 master-0 kubenswrapper[31420]: E0220 12:08:51.729646 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert podName:79e3c3e3-1405-4ac9-b024-b6f2d25347b4 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:53.729617715 +0000 UTC m=+238.448855996 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert") pod "console-7f6444fbcc-rvd49" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4") : secret "console-serving-cert" not found
Feb 20 12:08:51.730214 master-0 kubenswrapper[31420]: I0220 12:08:51.729561 31420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a4b974-ceda-481d-8e07-f6e94e37095c-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 12:08:52.157125 master-0 kubenswrapper[31420]: I0220 12:08:52.156982 31420 patch_prober.go:28] interesting pod/downloads-955b69498-tnjkt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" start-of-body=
Feb 20 12:08:52.157125 master-0 kubenswrapper[31420]: I0220 12:08:52.157100 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-tnjkt" podUID="2497e863-ea03-4513-8d7a-3b5fef6f323a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused"
Feb 20 12:08:52.157125 master-0 kubenswrapper[31420]: I0220 12:08:52.157124 31420 patch_prober.go:28] interesting pod/downloads-955b69498-tnjkt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused" start-of-body=
Feb 20 12:08:52.157823 master-0 kubenswrapper[31420]: I0220 12:08:52.157174 31420 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-955b69498-tnjkt" podUID="2497e863-ea03-4513-8d7a-3b5fef6f323a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.98:8080/\": dial tcp 10.128.0.98:8080: connect: connection refused"
Feb 20 12:08:52.542589 master-0 kubenswrapper[31420]: I0220 12:08:52.542485 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2"
Feb 20 12:08:52.542916 master-0 kubenswrapper[31420]: E0220 12:08:52.542772 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:52.542995 master-0 kubenswrapper[31420]: E0220 12:08:52.542917 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert podName:8a7f358a-4a42-4323-ba9f-888aec86247a nodeName:}" failed. No retries permitted until 2026-02-20 12:08:56.54288032 +0000 UTC m=+241.262118631 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert") pod "console-6d9c46fd68-spxt2" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a") : secret "console-serving-cert" not found
Feb 20 12:08:53.359014 master-0 kubenswrapper[31420]: I0220 12:08:53.358890 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m"
Feb 20 12:08:53.359863 master-0 kubenswrapper[31420]: E0220 12:08:53.359114 31420 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 20 12:08:53.359863 master-0 kubenswrapper[31420]: E0220 12:08:53.359217 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert podName:e76022bd-2c71-4e40-9f65-d07f3ba095f1 nodeName:}" failed. No retries permitted until 2026-02-20 12:09:01.359192522 +0000 UTC m=+246.078430833 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-dtq2m" (UID: "e76022bd-2c71-4e40-9f65-d07f3ba095f1") : secret "networking-console-plugin-cert" not found
Feb 20 12:08:53.511599 master-0 kubenswrapper[31420]: I0220 12:08:53.511494 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a4b974-ceda-481d-8e07-f6e94e37095c" path="/var/lib/kubelet/pods/97a4b974-ceda-481d-8e07-f6e94e37095c/volumes"
Feb 20 12:08:53.771359 master-0 kubenswrapper[31420]: I0220 12:08:53.771273 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:53.771825 master-0 kubenswrapper[31420]: E0220 12:08:53.771497 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:53.771825 master-0 kubenswrapper[31420]: E0220 12:08:53.771665 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert podName:79e3c3e3-1405-4ac9-b024-b6f2d25347b4 nodeName:}" failed. No retries permitted until 2026-02-20 12:08:57.77163505 +0000 UTC m=+242.490873331 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert") pod "console-7f6444fbcc-rvd49" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4") : secret "console-serving-cert" not found
Feb 20 12:08:56.616563 master-0 kubenswrapper[31420]: I0220 12:08:56.616485 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2"
Feb 20 12:08:56.617483 master-0 kubenswrapper[31420]: E0220 12:08:56.616831 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:56.617483 master-0 kubenswrapper[31420]: E0220 12:08:56.616937 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert podName:8a7f358a-4a42-4323-ba9f-888aec86247a nodeName:}" failed. No retries permitted until 2026-02-20 12:09:04.616907138 +0000 UTC m=+249.336145409 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert") pod "console-6d9c46fd68-spxt2" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a") : secret "console-serving-cert" not found
Feb 20 12:08:57.835811 master-0 kubenswrapper[31420]: I0220 12:08:57.835737 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:08:57.836478 master-0 kubenswrapper[31420]: E0220 12:08:57.835953 31420 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Feb 20 12:08:57.836478 master-0 kubenswrapper[31420]: E0220 12:08:57.836079 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert podName:79e3c3e3-1405-4ac9-b024-b6f2d25347b4 nodeName:}" failed. No retries permitted until 2026-02-20 12:09:05.836049277 +0000 UTC m=+250.555287618 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert") pod "console-7f6444fbcc-rvd49" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4") : secret "console-serving-cert" not found
Feb 20 12:09:01.459346 master-0 kubenswrapper[31420]: I0220 12:09:01.459290 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m"
Feb 20 12:09:01.462314 master-0 kubenswrapper[31420]: I0220 12:09:01.462274 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e76022bd-2c71-4e40-9f65-d07f3ba095f1-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-dtq2m\" (UID: \"e76022bd-2c71-4e40-9f65-d07f3ba095f1\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m"
Feb 20 12:09:01.472969 master-0 kubenswrapper[31420]: I0220 12:09:01.472938 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m"
Feb 20 12:09:02.192430 master-0 kubenswrapper[31420]: I0220 12:09:02.192323 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-955b69498-tnjkt"
Feb 20 12:09:02.225004 master-0 kubenswrapper[31420]: I0220 12:09:02.224929 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-dtq2m"]
Feb 20 12:09:02.678994 master-0 kubenswrapper[31420]: I0220 12:09:02.678879 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" event={"ID":"e76022bd-2c71-4e40-9f65-d07f3ba095f1","Type":"ContainerStarted","Data":"f32b377e7b66570a2d8c791d7f3de6664c1275d9b806bcdfad4dd0a57085b64c"}
Feb 20 12:09:04.639855 master-0 kubenswrapper[31420]: I0220 12:09:04.639750 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2"
Feb 20 12:09:04.645922 master-0 kubenswrapper[31420]: I0220 12:09:04.645835 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") pod \"console-6d9c46fd68-spxt2\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " pod="openshift-console/console-6d9c46fd68-spxt2"
Feb 20 12:09:04.683907 master-0 kubenswrapper[31420]: I0220 12:09:04.683796 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-2qn6j"
Feb 20 12:09:04.691028 master-0 kubenswrapper[31420]: I0220 12:09:04.690965 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d9c46fd68-spxt2"
Feb 20 12:09:05.836518 master-0 kubenswrapper[31420]: I0220 12:09:05.836418 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d9c46fd68-spxt2"]
Feb 20 12:09:05.882422 master-0 kubenswrapper[31420]: I0220 12:09:05.882330 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:09:05.888341 master-0 kubenswrapper[31420]: I0220 12:09:05.888302 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") pod \"console-7f6444fbcc-rvd49\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:09:05.923557 master-0 kubenswrapper[31420]: I0220 12:09:05.923449 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f6444fbcc-rvd49"
Feb 20 12:09:06.274348 master-0 kubenswrapper[31420]: I0220 12:09:06.274306 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"]
Feb 20 12:09:06.442979 master-0 kubenswrapper[31420]: I0220 12:09:06.442921 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f6444fbcc-rvd49"]
Feb 20 12:09:06.673954 master-0 kubenswrapper[31420]: I0220 12:09:06.673795 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 20 12:09:06.681582 master-0 kubenswrapper[31420]: I0220 12:09:06.679173 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.681582 master-0 kubenswrapper[31420]: I0220 12:09:06.680936 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 20 12:09:06.681582 master-0 kubenswrapper[31420]: I0220 12:09:06.681436 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 20 12:09:06.682621 master-0 kubenswrapper[31420]: I0220 12:09:06.682206 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 20 12:09:06.682621 master-0 kubenswrapper[31420]: I0220 12:09:06.682244 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 20 12:09:06.682621 master-0 kubenswrapper[31420]: I0220 12:09:06.682261 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 20 12:09:06.682621 master-0 kubenswrapper[31420]: I0220 12:09:06.682208 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 20 12:09:06.683846 master-0 kubenswrapper[31420]: I0220 12:09:06.683266 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 20 12:09:06.695334 master-0 kubenswrapper[31420]: I0220 12:09:06.695108 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d6df1a9-67a1-4776-917e-aa4aa6424faf-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.695334 master-0 kubenswrapper[31420]: I0220 12:09:06.695174 31420 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d6df1a9-67a1-4776-917e-aa4aa6424faf-config-out\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.695334 master-0 kubenswrapper[31420]: I0220 12:09:06.695196 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.695334 master-0 kubenswrapper[31420]: I0220 12:09:06.695222 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-config-volume\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.695334 master-0 kubenswrapper[31420]: I0220 12:09:06.695244 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85cb6\" (UniqueName: \"kubernetes.io/projected/4d6df1a9-67a1-4776-917e-aa4aa6424faf-kube-api-access-85cb6\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.695960 master-0 kubenswrapper[31420]: I0220 12:09:06.695407 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4d6df1a9-67a1-4776-917e-aa4aa6424faf-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.695960 master-0 kubenswrapper[31420]: I0220 12:09:06.695594 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d6df1a9-67a1-4776-917e-aa4aa6424faf-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.695960 master-0 kubenswrapper[31420]: I0220 12:09:06.695669 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.695960 master-0 kubenswrapper[31420]: I0220 12:09:06.695715 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-web-config\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.695960 master-0 kubenswrapper[31420]: I0220 12:09:06.695753 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.696162 master-0 kubenswrapper[31420]: I0220 12:09:06.695978 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: 
\"kubernetes.io/empty-dir/4d6df1a9-67a1-4776-917e-aa4aa6424faf-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.696162 master-0 kubenswrapper[31420]: I0220 12:09:06.696007 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.696329 master-0 kubenswrapper[31420]: I0220 12:09:06.696269 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 20 12:09:06.696378 master-0 kubenswrapper[31420]: I0220 12:09:06.696347 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 20 12:09:06.730708 master-0 kubenswrapper[31420]: I0220 12:09:06.730621 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" event={"ID":"e76022bd-2c71-4e40-9f65-d07f3ba095f1","Type":"ContainerStarted","Data":"11e2a9bc8943cc2a807892a12c9574ca69a70ddfc6f2936b95391b48c64fa32d"} Feb 20 12:09:06.733473 master-0 kubenswrapper[31420]: I0220 12:09:06.733412 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d9c46fd68-spxt2" event={"ID":"8a7f358a-4a42-4323-ba9f-888aec86247a","Type":"ContainerStarted","Data":"272a477ac555c214c0000ad8fa9ef87fac0ca7099e40e3ddb86900dd5ba529d1"} Feb 20 12:09:06.735343 master-0 kubenswrapper[31420]: I0220 12:09:06.735299 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f6444fbcc-rvd49" 
event={"ID":"79e3c3e3-1405-4ac9-b024-b6f2d25347b4","Type":"ContainerStarted","Data":"3e5d2298e3d9e07ec065996b96601a8bf745dd995ddcab587dd163020295fac8"} Feb 20 12:09:06.759263 master-0 kubenswrapper[31420]: I0220 12:09:06.759068 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-79f587d78f-dtq2m" podStartSLOduration=17.702557969 podStartE2EDuration="21.759044545s" podCreationTimestamp="2026-02-20 12:08:45 +0000 UTC" firstStartedPulling="2026-02-20 12:09:02.221676589 +0000 UTC m=+246.940914870" lastFinishedPulling="2026-02-20 12:09:06.278163205 +0000 UTC m=+250.997401446" observedRunningTime="2026-02-20 12:09:06.750311934 +0000 UTC m=+251.469550175" watchObservedRunningTime="2026-02-20 12:09:06.759044545 +0000 UTC m=+251.478282796" Feb 20 12:09:06.804233 master-0 kubenswrapper[31420]: I0220 12:09:06.804138 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d6df1a9-67a1-4776-917e-aa4aa6424faf-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804233 master-0 kubenswrapper[31420]: I0220 12:09:06.804245 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804274 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-web-config\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804302 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804322 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4d6df1a9-67a1-4776-917e-aa4aa6424faf-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804338 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804370 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d6df1a9-67a1-4776-917e-aa4aa6424faf-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804397 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/4d6df1a9-67a1-4776-917e-aa4aa6424faf-config-out\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804418 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804446 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-config-volume\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804463 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85cb6\" (UniqueName: \"kubernetes.io/projected/4d6df1a9-67a1-4776-917e-aa4aa6424faf-kube-api-access-85cb6\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.804598 master-0 kubenswrapper[31420]: I0220 12:09:06.804493 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4d6df1a9-67a1-4776-917e-aa4aa6424faf-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.807517 master-0 kubenswrapper[31420]: I0220 12:09:06.806965 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d6df1a9-67a1-4776-917e-aa4aa6424faf-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.808713 master-0 kubenswrapper[31420]: I0220 12:09:06.807931 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4d6df1a9-67a1-4776-917e-aa4aa6424faf-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.808713 master-0 kubenswrapper[31420]: I0220 12:09:06.808245 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4d6df1a9-67a1-4776-917e-aa4aa6424faf-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.809183 master-0 kubenswrapper[31420]: I0220 12:09:06.809140 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.809246 master-0 kubenswrapper[31420]: I0220 12:09:06.809226 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-config-volume\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.810306 master-0 kubenswrapper[31420]: I0220 12:09:06.810269 31420 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.811099 master-0 kubenswrapper[31420]: I0220 12:09:06.810838 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d6df1a9-67a1-4776-917e-aa4aa6424faf-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.811099 master-0 kubenswrapper[31420]: I0220 12:09:06.811059 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d6df1a9-67a1-4776-917e-aa4aa6424faf-config-out\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.811256 master-0 kubenswrapper[31420]: I0220 12:09:06.811219 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.811704 master-0 kubenswrapper[31420]: I0220 12:09:06.811663 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.820071 master-0 kubenswrapper[31420]: I0220 12:09:06.820037 
31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d6df1a9-67a1-4776-917e-aa4aa6424faf-web-config\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:06.825371 master-0 kubenswrapper[31420]: I0220 12:09:06.825347 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85cb6\" (UniqueName: \"kubernetes.io/projected/4d6df1a9-67a1-4776-917e-aa4aa6424faf-kube-api-access-85cb6\") pod \"alertmanager-main-0\" (UID: \"4d6df1a9-67a1-4776-917e-aa4aa6424faf\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:07.010035 master-0 kubenswrapper[31420]: I0220 12:09:07.009964 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 20 12:09:07.572723 master-0 kubenswrapper[31420]: I0220 12:09:07.572656 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 20 12:09:07.588697 master-0 kubenswrapper[31420]: I0220 12:09:07.588645 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn"] Feb 20 12:09:07.604139 master-0 kubenswrapper[31420]: I0220 12:09:07.604079 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.607949 master-0 kubenswrapper[31420]: I0220 12:09:07.607921 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-bkdspsfk9v6db" Feb 20 12:09:07.608179 master-0 kubenswrapper[31420]: I0220 12:09:07.608158 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 20 12:09:07.608622 master-0 kubenswrapper[31420]: I0220 12:09:07.608511 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 20 12:09:07.608782 master-0 kubenswrapper[31420]: I0220 12:09:07.608760 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 20 12:09:07.608980 master-0 kubenswrapper[31420]: I0220 12:09:07.608960 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 20 12:09:07.609993 master-0 kubenswrapper[31420]: I0220 12:09:07.609968 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 20 12:09:07.617977 master-0 kubenswrapper[31420]: I0220 12:09:07.617901 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn"] Feb 20 12:09:07.630403 master-0 kubenswrapper[31420]: I0220 12:09:07.630330 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-grpc-tls\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.630697 master-0 kubenswrapper[31420]: I0220 
12:09:07.630542 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.630858 master-0 kubenswrapper[31420]: I0220 12:09:07.630710 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvhz\" (UniqueName: \"kubernetes.io/projected/b228c455-3f6c-4557-8bf1-e7b2fe45f275-kube-api-access-txvhz\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.630858 master-0 kubenswrapper[31420]: I0220 12:09:07.630773 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.630858 master-0 kubenswrapper[31420]: I0220 12:09:07.630814 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-tls\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.630858 master-0 kubenswrapper[31420]: I0220 12:09:07.630832 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.631121 master-0 kubenswrapper[31420]: I0220 12:09:07.630931 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.631121 master-0 kubenswrapper[31420]: I0220 12:09:07.631060 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b228c455-3f6c-4557-8bf1-e7b2fe45f275-metrics-client-ca\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.735323 master-0 kubenswrapper[31420]: I0220 12:09:07.735223 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.735563 master-0 kubenswrapper[31420]: I0220 12:09:07.735335 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b228c455-3f6c-4557-8bf1-e7b2fe45f275-metrics-client-ca\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.735563 master-0 kubenswrapper[31420]: I0220 12:09:07.735386 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-grpc-tls\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.735563 master-0 kubenswrapper[31420]: I0220 12:09:07.735427 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.735563 master-0 kubenswrapper[31420]: I0220 12:09:07.735465 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvhz\" (UniqueName: \"kubernetes.io/projected/b228c455-3f6c-4557-8bf1-e7b2fe45f275-kube-api-access-txvhz\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.735563 master-0 kubenswrapper[31420]: I0220 12:09:07.735496 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " 
pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.735563 master-0 kubenswrapper[31420]: I0220 12:09:07.735516 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-tls\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.735563 master-0 kubenswrapper[31420]: I0220 12:09:07.735556 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.737782 master-0 kubenswrapper[31420]: I0220 12:09:07.737738 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b228c455-3f6c-4557-8bf1-e7b2fe45f275-metrics-client-ca\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.741556 master-0 kubenswrapper[31420]: I0220 12:09:07.741217 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.741556 master-0 kubenswrapper[31420]: I0220 12:09:07.741330 31420 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.742671 master-0 kubenswrapper[31420]: I0220 12:09:07.742627 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-grpc-tls\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.745653 master-0 kubenswrapper[31420]: I0220 12:09:07.743810 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.745653 master-0 kubenswrapper[31420]: I0220 12:09:07.744555 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-tls\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.746051 master-0 kubenswrapper[31420]: I0220 12:09:07.745820 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b228c455-3f6c-4557-8bf1-e7b2fe45f275-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: 
\"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.750929 master-0 kubenswrapper[31420]: I0220 12:09:07.750849 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4d6df1a9-67a1-4776-917e-aa4aa6424faf","Type":"ContainerStarted","Data":"d6bb8d5b16e0138bbad6a12bd16859c5de0c61a8ea287e50de596b210091e60c"} Feb 20 12:09:07.751944 master-0 kubenswrapper[31420]: I0220 12:09:07.751900 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvhz\" (UniqueName: \"kubernetes.io/projected/b228c455-3f6c-4557-8bf1-e7b2fe45f275-kube-api-access-txvhz\") pod \"thanos-querier-57dfb4b6b4-gvjmn\" (UID: \"b228c455-3f6c-4557-8bf1-e7b2fe45f275\") " pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:07.963787 master-0 kubenswrapper[31420]: I0220 12:09:07.963545 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:08.425704 master-0 kubenswrapper[31420]: I0220 12:09:08.425629 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn"] Feb 20 12:09:08.758291 master-0 kubenswrapper[31420]: I0220 12:09:08.758225 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" event={"ID":"b228c455-3f6c-4557-8bf1-e7b2fe45f275","Type":"ContainerStarted","Data":"04de6f46d6ae3aeb9b8aaae56d2b3f3d7b1e73fe618f0f95d6c7f47271a370e8"} Feb 20 12:09:08.760428 master-0 kubenswrapper[31420]: I0220 12:09:08.760362 31420 generic.go:334] "Generic (PLEG): container finished" podID="4d6df1a9-67a1-4776-917e-aa4aa6424faf" containerID="9798cb0c4ec7349a709b9e0c9e896d2b8579dc22d0a6acab51533dface550c74" exitCode=0 Feb 20 12:09:08.760581 master-0 kubenswrapper[31420]: I0220 12:09:08.760431 31420 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4d6df1a9-67a1-4776-917e-aa4aa6424faf","Type":"ContainerDied","Data":"9798cb0c4ec7349a709b9e0c9e896d2b8579dc22d0a6acab51533dface550c74"} Feb 20 12:09:10.311370 master-0 kubenswrapper[31420]: I0220 12:09:10.311288 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-65bb9698b4-rf9nz"] Feb 20 12:09:10.312821 master-0 kubenswrapper[31420]: I0220 12:09:10.312780 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.317735 master-0 kubenswrapper[31420]: I0220 12:09:10.317679 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5rrrmulfd8ffj" Feb 20 12:09:10.330907 master-0 kubenswrapper[31420]: I0220 12:09:10.328662 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"] Feb 20 12:09:10.330907 master-0 kubenswrapper[31420]: I0220 12:09:10.329097 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" podUID="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" containerName="metrics-server" containerID="cri-o://4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a" gracePeriod=170 Feb 20 12:09:10.341482 master-0 kubenswrapper[31420]: I0220 12:09:10.341400 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65bb9698b4-rf9nz"] Feb 20 12:09:10.379754 master-0 kubenswrapper[31420]: I0220 12:09:10.379337 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/164fcfe3-a130-4e21-afdf-3bafadeef238-client-ca-bundle\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " 
pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.379754 master-0 kubenswrapper[31420]: I0220 12:09:10.379401 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bq76\" (UniqueName: \"kubernetes.io/projected/164fcfe3-a130-4e21-afdf-3bafadeef238-kube-api-access-7bq76\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.379754 master-0 kubenswrapper[31420]: I0220 12:09:10.379611 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/164fcfe3-a130-4e21-afdf-3bafadeef238-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.379754 master-0 kubenswrapper[31420]: I0220 12:09:10.379663 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/164fcfe3-a130-4e21-afdf-3bafadeef238-audit-log\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.379754 master-0 kubenswrapper[31420]: I0220 12:09:10.379705 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/164fcfe3-a130-4e21-afdf-3bafadeef238-metrics-server-audit-profiles\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.380441 master-0 kubenswrapper[31420]: I0220 12:09:10.379887 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/164fcfe3-a130-4e21-afdf-3bafadeef238-secret-metrics-server-tls\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.380441 master-0 kubenswrapper[31420]: I0220 12:09:10.379948 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/164fcfe3-a130-4e21-afdf-3bafadeef238-secret-metrics-client-certs\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.481209 master-0 kubenswrapper[31420]: I0220 12:09:10.481131 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/164fcfe3-a130-4e21-afdf-3bafadeef238-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.482479 master-0 kubenswrapper[31420]: I0220 12:09:10.481454 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/164fcfe3-a130-4e21-afdf-3bafadeef238-audit-log\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.482479 master-0 kubenswrapper[31420]: I0220 12:09:10.481666 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/164fcfe3-a130-4e21-afdf-3bafadeef238-metrics-server-audit-profiles\") pod 
\"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.482479 master-0 kubenswrapper[31420]: I0220 12:09:10.481809 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/164fcfe3-a130-4e21-afdf-3bafadeef238-secret-metrics-server-tls\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.482479 master-0 kubenswrapper[31420]: I0220 12:09:10.481900 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/164fcfe3-a130-4e21-afdf-3bafadeef238-secret-metrics-client-certs\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.482479 master-0 kubenswrapper[31420]: I0220 12:09:10.481971 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/164fcfe3-a130-4e21-afdf-3bafadeef238-audit-log\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.482479 master-0 kubenswrapper[31420]: I0220 12:09:10.482084 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/164fcfe3-a130-4e21-afdf-3bafadeef238-client-ca-bundle\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.482479 master-0 kubenswrapper[31420]: I0220 12:09:10.482110 31420 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7bq76\" (UniqueName: \"kubernetes.io/projected/164fcfe3-a130-4e21-afdf-3bafadeef238-kube-api-access-7bq76\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.482479 master-0 kubenswrapper[31420]: I0220 12:09:10.482371 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/164fcfe3-a130-4e21-afdf-3bafadeef238-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.483605 master-0 kubenswrapper[31420]: I0220 12:09:10.483507 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/164fcfe3-a130-4e21-afdf-3bafadeef238-metrics-server-audit-profiles\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.489074 master-0 kubenswrapper[31420]: I0220 12:09:10.486674 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/164fcfe3-a130-4e21-afdf-3bafadeef238-client-ca-bundle\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.489074 master-0 kubenswrapper[31420]: I0220 12:09:10.488613 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/164fcfe3-a130-4e21-afdf-3bafadeef238-secret-metrics-server-tls\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " 
pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.489074 master-0 kubenswrapper[31420]: I0220 12:09:10.489011 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/164fcfe3-a130-4e21-afdf-3bafadeef238-secret-metrics-client-certs\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.505492 master-0 kubenswrapper[31420]: I0220 12:09:10.505399 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bq76\" (UniqueName: \"kubernetes.io/projected/164fcfe3-a130-4e21-afdf-3bafadeef238-kube-api-access-7bq76\") pod \"metrics-server-65bb9698b4-rf9nz\" (UID: \"164fcfe3-a130-4e21-afdf-3bafadeef238\") " pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:10.643550 master-0 kubenswrapper[31420]: I0220 12:09:10.642932 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:11.235008 master-0 kubenswrapper[31420]: I0220 12:09:11.234965 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65bb9698b4-rf9nz"] Feb 20 12:09:11.241285 master-0 kubenswrapper[31420]: W0220 12:09:11.241188 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164fcfe3_a130_4e21_afdf_3bafadeef238.slice/crio-4728f28cf6663376dee0d501cbfec70d8862e84ee694f2fb0a85e2952b03cd34 WatchSource:0}: Error finding container 4728f28cf6663376dee0d501cbfec70d8862e84ee694f2fb0a85e2952b03cd34: Status 404 returned error can't find the container with id 4728f28cf6663376dee0d501cbfec70d8862e84ee694f2fb0a85e2952b03cd34 Feb 20 12:09:11.801973 master-0 kubenswrapper[31420]: I0220 12:09:11.801869 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" event={"ID":"164fcfe3-a130-4e21-afdf-3bafadeef238","Type":"ContainerStarted","Data":"d4dd927b123a709464bb116084c4b75454b1ee121ce1117664bcaad12aca8ef4"} Feb 20 12:09:11.801973 master-0 kubenswrapper[31420]: I0220 12:09:11.801957 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" event={"ID":"164fcfe3-a130-4e21-afdf-3bafadeef238","Type":"ContainerStarted","Data":"4728f28cf6663376dee0d501cbfec70d8862e84ee694f2fb0a85e2952b03cd34"} Feb 20 12:09:11.811623 master-0 kubenswrapper[31420]: I0220 12:09:11.811582 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f6444fbcc-rvd49" event={"ID":"79e3c3e3-1405-4ac9-b024-b6f2d25347b4","Type":"ContainerStarted","Data":"813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22"} Feb 20 12:09:11.814426 master-0 kubenswrapper[31420]: I0220 12:09:11.814243 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-6d9c46fd68-spxt2_8a7f358a-4a42-4323-ba9f-888aec86247a/console/0.log" Feb 20 12:09:11.814426 master-0 kubenswrapper[31420]: I0220 12:09:11.814355 31420 generic.go:334] "Generic (PLEG): container finished" podID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerID="7f2f2cb27bbfff87820a7d62fa5278c5885f9c6dd9922125c3a1f81bdc7d7195" exitCode=255 Feb 20 12:09:11.814588 master-0 kubenswrapper[31420]: I0220 12:09:11.814500 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d9c46fd68-spxt2" event={"ID":"8a7f358a-4a42-4323-ba9f-888aec86247a","Type":"ContainerDied","Data":"7f2f2cb27bbfff87820a7d62fa5278c5885f9c6dd9922125c3a1f81bdc7d7195"} Feb 20 12:09:11.814781 master-0 kubenswrapper[31420]: I0220 12:09:11.814737 31420 scope.go:117] "RemoveContainer" containerID="7f2f2cb27bbfff87820a7d62fa5278c5885f9c6dd9922125c3a1f81bdc7d7195" Feb 20 12:09:12.501860 master-0 kubenswrapper[31420]: I0220 12:09:12.501691 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" podStartSLOduration=2.5016606230000002 podStartE2EDuration="2.501660623s" podCreationTimestamp="2026-02-20 12:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:09:12.496700206 +0000 UTC m=+257.215938527" watchObservedRunningTime="2026-02-20 12:09:12.501660623 +0000 UTC m=+257.220898894" Feb 20 12:09:12.805516 master-0 kubenswrapper[31420]: I0220 12:09:12.805392 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f6444fbcc-rvd49" podStartSLOduration=19.368931554 podStartE2EDuration="23.805372225s" podCreationTimestamp="2026-02-20 12:08:49 +0000 UTC" firstStartedPulling="2026-02-20 12:09:06.438549931 +0000 UTC m=+251.157788172" lastFinishedPulling="2026-02-20 12:09:10.874990582 +0000 UTC m=+255.594228843" 
observedRunningTime="2026-02-20 12:09:12.800274824 +0000 UTC m=+257.519513085" watchObservedRunningTime="2026-02-20 12:09:12.805372225 +0000 UTC m=+257.524610466" Feb 20 12:09:12.824820 master-0 kubenswrapper[31420]: I0220 12:09:12.824757 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6d9c46fd68-spxt2_8a7f358a-4a42-4323-ba9f-888aec86247a/console/0.log" Feb 20 12:09:12.824979 master-0 kubenswrapper[31420]: I0220 12:09:12.824845 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d9c46fd68-spxt2" event={"ID":"8a7f358a-4a42-4323-ba9f-888aec86247a","Type":"ContainerStarted","Data":"9fb57990b9207fa6d4fe791972eb076de54f242b4467b952a304b997f55aee4c"} Feb 20 12:09:14.436323 master-0 kubenswrapper[31420]: I0220 12:09:14.436229 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6d9c46fd68-spxt2" podStartSLOduration=21.434466399 podStartE2EDuration="26.436204s" podCreationTimestamp="2026-02-20 12:08:48 +0000 UTC" firstStartedPulling="2026-02-20 12:09:05.841981578 +0000 UTC m=+250.561219819" lastFinishedPulling="2026-02-20 12:09:10.843719149 +0000 UTC m=+255.562957420" observedRunningTime="2026-02-20 12:09:14.435996904 +0000 UTC m=+259.155235155" watchObservedRunningTime="2026-02-20 12:09:14.436204 +0000 UTC m=+259.155442241" Feb 20 12:09:14.562552 master-0 kubenswrapper[31420]: I0220 12:09:14.557820 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 12:09:14.562552 master-0 kubenswrapper[31420]: I0220 12:09:14.560601 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.564387 master-0 kubenswrapper[31420]: I0220 12:09:14.564352 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 20 12:09:14.564556 master-0 kubenswrapper[31420]: I0220 12:09:14.564520 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 20 12:09:14.566256 master-0 kubenswrapper[31420]: I0220 12:09:14.566212 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 20 12:09:14.566512 master-0 kubenswrapper[31420]: I0220 12:09:14.566483 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 20 12:09:14.566604 master-0 kubenswrapper[31420]: I0220 12:09:14.566571 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 20 12:09:14.566654 master-0 kubenswrapper[31420]: I0220 12:09:14.566637 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 20 12:09:14.566732 master-0 kubenswrapper[31420]: I0220 12:09:14.566706 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 20 12:09:14.566799 master-0 kubenswrapper[31420]: I0220 12:09:14.566739 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 20 12:09:14.570618 master-0 kubenswrapper[31420]: I0220 12:09:14.567206 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 20 12:09:14.570618 master-0 kubenswrapper[31420]: I0220 12:09:14.568239 31420 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 20 12:09:14.570618 master-0 kubenswrapper[31420]: I0220 12:09:14.570263 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-f5sjk65senaqu" Feb 20 12:09:14.571667 master-0 kubenswrapper[31420]: I0220 12:09:14.571278 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.606776 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.606830 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2efa6b2-2332-40b8-8e94-e7d8552ab973-config-out\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.606850 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.606877 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/e2efa6b2-2332-40b8-8e94-e7d8552ab973-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.606901 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.606921 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.606941 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.606958 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5dcg\" (UniqueName: \"kubernetes.io/projected/e2efa6b2-2332-40b8-8e94-e7d8552ab973-kube-api-access-g5dcg\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 
kubenswrapper[31420]: I0220 12:09:14.606978 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-web-config\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.607000 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2efa6b2-2332-40b8-8e94-e7d8552ab973-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.607037 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.607051 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.607069 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" 
(UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.607090 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-config\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.607112 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.607157 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.607183 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.607549 master-0 kubenswrapper[31420]: I0220 12:09:14.607201 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.621608 master-0 kubenswrapper[31420]: I0220 12:09:14.619007 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 12:09:14.691853 master-0 kubenswrapper[31420]: I0220 12:09:14.691731 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:09:14.692834 master-0 kubenswrapper[31420]: I0220 12:09:14.692787 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:09:14.697544 master-0 kubenswrapper[31420]: I0220 12:09:14.694794 31420 patch_prober.go:28] interesting pod/console-6d9c46fd68-spxt2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Feb 20 12:09:14.697544 master-0 kubenswrapper[31420]: I0220 12:09:14.694848 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6d9c46fd68-spxt2" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707617 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: 
I0220 12:09:14.707695 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e2efa6b2-2332-40b8-8e94-e7d8552ab973-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707732 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707762 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707804 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707827 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5dcg\" (UniqueName: \"kubernetes.io/projected/e2efa6b2-2332-40b8-8e94-e7d8552ab973-kube-api-access-g5dcg\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707848 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-web-config\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707876 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2efa6b2-2332-40b8-8e94-e7d8552ab973-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707907 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707930 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.707982 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: 
\"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.708013 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-config\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708056 master-0 kubenswrapper[31420]: I0220 12:09:14.708044 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708622 master-0 kubenswrapper[31420]: I0220 12:09:14.708084 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708622 master-0 kubenswrapper[31420]: I0220 12:09:14.708112 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708622 master-0 kubenswrapper[31420]: I0220 12:09:14.708140 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708622 master-0 kubenswrapper[31420]: I0220 12:09:14.708171 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.708622 master-0 kubenswrapper[31420]: I0220 12:09:14.708194 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2efa6b2-2332-40b8-8e94-e7d8552ab973-config-out\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.711235 master-0 kubenswrapper[31420]: I0220 12:09:14.710299 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.711235 master-0 kubenswrapper[31420]: I0220 12:09:14.710857 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.712421 master-0 kubenswrapper[31420]: I0220 12:09:14.712383 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.713312 master-0 kubenswrapper[31420]: I0220 12:09:14.712501 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.713312 master-0 kubenswrapper[31420]: I0220 12:09:14.713169 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e2efa6b2-2332-40b8-8e94-e7d8552ab973-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.714770 master-0 kubenswrapper[31420]: I0220 12:09:14.714698 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2efa6b2-2332-40b8-8e94-e7d8552ab973-config-out\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.714864 master-0 kubenswrapper[31420]: I0220 12:09:14.714828 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-config\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.715277 master-0 kubenswrapper[31420]: I0220 12:09:14.715254 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/e2efa6b2-2332-40b8-8e94-e7d8552ab973-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.715371 master-0 kubenswrapper[31420]: I0220 12:09:14.715348 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.715714 master-0 kubenswrapper[31420]: I0220 12:09:14.715694 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-web-config\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.715863 master-0 kubenswrapper[31420]: I0220 12:09:14.715838 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.715915 master-0 kubenswrapper[31420]: I0220 12:09:14.715890 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.719065 master-0 kubenswrapper[31420]: I0220 12:09:14.719022 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" 
(UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.719065 master-0 kubenswrapper[31420]: I0220 12:09:14.719048 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.719196 master-0 kubenswrapper[31420]: I0220 12:09:14.719128 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.721840 master-0 kubenswrapper[31420]: I0220 12:09:14.721468 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e2efa6b2-2332-40b8-8e94-e7d8552ab973-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.722463 master-0 kubenswrapper[31420]: I0220 12:09:14.722055 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2efa6b2-2332-40b8-8e94-e7d8552ab973-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.736618 master-0 kubenswrapper[31420]: I0220 12:09:14.733817 31420 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-g5dcg\" (UniqueName: \"kubernetes.io/projected/e2efa6b2-2332-40b8-8e94-e7d8552ab973-kube-api-access-g5dcg\") pod \"prometheus-k8s-0\" (UID: \"e2efa6b2-2332-40b8-8e94-e7d8552ab973\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:14.768559 master-0 kubenswrapper[31420]: I0220 12:09:14.768303 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:15.251624 master-0 kubenswrapper[31420]: I0220 12:09:15.251542 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 12:09:15.260920 master-0 kubenswrapper[31420]: W0220 12:09:15.260843 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2efa6b2_2332_40b8_8e94_e7d8552ab973.slice/crio-4aeae7e43770ee42ec20d876ee2d45f2dabd5573dc58b00cf3fc33747ccb1c00 WatchSource:0}: Error finding container 4aeae7e43770ee42ec20d876ee2d45f2dabd5573dc58b00cf3fc33747ccb1c00: Status 404 returned error can't find the container with id 4aeae7e43770ee42ec20d876ee2d45f2dabd5573dc58b00cf3fc33747ccb1c00 Feb 20 12:09:15.855338 master-0 kubenswrapper[31420]: I0220 12:09:15.855229 31420 generic.go:334] "Generic (PLEG): container finished" podID="e2efa6b2-2332-40b8-8e94-e7d8552ab973" containerID="005823c178da8efa43a3b4ff50ea9ecd4afa93553193cdc9aebaab5fa9e2885b" exitCode=0 Feb 20 12:09:15.856168 master-0 kubenswrapper[31420]: I0220 12:09:15.855434 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2efa6b2-2332-40b8-8e94-e7d8552ab973","Type":"ContainerDied","Data":"005823c178da8efa43a3b4ff50ea9ecd4afa93553193cdc9aebaab5fa9e2885b"} Feb 20 12:09:15.856168 master-0 kubenswrapper[31420]: I0220 12:09:15.855480 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"e2efa6b2-2332-40b8-8e94-e7d8552ab973","Type":"ContainerStarted","Data":"4aeae7e43770ee42ec20d876ee2d45f2dabd5573dc58b00cf3fc33747ccb1c00"} Feb 20 12:09:15.861828 master-0 kubenswrapper[31420]: I0220 12:09:15.861762 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" event={"ID":"b228c455-3f6c-4557-8bf1-e7b2fe45f275","Type":"ContainerStarted","Data":"710dacbb85db30ac94c828f47a6140759a03dea546a5d898a59ed92f1b6effb8"} Feb 20 12:09:15.861828 master-0 kubenswrapper[31420]: I0220 12:09:15.861816 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" event={"ID":"b228c455-3f6c-4557-8bf1-e7b2fe45f275","Type":"ContainerStarted","Data":"a13121cfb8bc33f3820ed13c090cfac8574c999afa90c78b5316127d2dd7c4b7"} Feb 20 12:09:15.861828 master-0 kubenswrapper[31420]: I0220 12:09:15.861834 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" event={"ID":"b228c455-3f6c-4557-8bf1-e7b2fe45f275","Type":"ContainerStarted","Data":"f8ea72d3b4c9b208b407ab8b532cc9c7f4a381d3282b30bbdc651857df4fc82f"} Feb 20 12:09:15.868882 master-0 kubenswrapper[31420]: I0220 12:09:15.868829 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4d6df1a9-67a1-4776-917e-aa4aa6424faf","Type":"ContainerStarted","Data":"e5268ec1439fddcb8b70fc5ff961a7b57a9cd762276b84ceb96a48c35b126ad7"} Feb 20 12:09:15.868882 master-0 kubenswrapper[31420]: I0220 12:09:15.868869 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4d6df1a9-67a1-4776-917e-aa4aa6424faf","Type":"ContainerStarted","Data":"8383c76587b95be5c0f3a76f56d48bcc4364c23e9d95543bc6765f72204d4c1b"} Feb 20 12:09:15.868882 master-0 kubenswrapper[31420]: I0220 12:09:15.868882 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4d6df1a9-67a1-4776-917e-aa4aa6424faf","Type":"ContainerStarted","Data":"351298370e31d6f15a57511b960e5de6f8e1cd55d2d1bd3cbf4416ad5c9dc6ba"} Feb 20 12:09:15.868882 master-0 kubenswrapper[31420]: I0220 12:09:15.868892 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4d6df1a9-67a1-4776-917e-aa4aa6424faf","Type":"ContainerStarted","Data":"2174eda4fe0aa5c71beab3e36e9db9d05a1927ce68061d6c08941849eb2162b3"} Feb 20 12:09:15.869263 master-0 kubenswrapper[31420]: I0220 12:09:15.868903 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4d6df1a9-67a1-4776-917e-aa4aa6424faf","Type":"ContainerStarted","Data":"e5c36d5222252e304a3946d5ee86f7b7a0ac6819d41bb75f268f38ce1583e63e"} Feb 20 12:09:15.924167 master-0 kubenswrapper[31420]: I0220 12:09:15.924094 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f6444fbcc-rvd49" Feb 20 12:09:15.924167 master-0 kubenswrapper[31420]: I0220 12:09:15.924157 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f6444fbcc-rvd49" Feb 20 12:09:15.925925 master-0 kubenswrapper[31420]: I0220 12:09:15.925851 31420 patch_prober.go:28] interesting pod/console-7f6444fbcc-rvd49 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 20 12:09:15.926032 master-0 kubenswrapper[31420]: I0220 12:09:15.925960 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7f6444fbcc-rvd49" podUID="79e3c3e3-1405-4ac9-b024-b6f2d25347b4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 20 
12:09:16.884473 master-0 kubenswrapper[31420]: I0220 12:09:16.884374 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" event={"ID":"b228c455-3f6c-4557-8bf1-e7b2fe45f275","Type":"ContainerStarted","Data":"a9d81fbacd07d54acff22ac663b2e992855ec0f72c3100b86f51ad44966ef14e"} Feb 20 12:09:16.884473 master-0 kubenswrapper[31420]: I0220 12:09:16.884445 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" event={"ID":"b228c455-3f6c-4557-8bf1-e7b2fe45f275","Type":"ContainerStarted","Data":"0fda5dfd87b6d647b757fdd1a69bbdc12f97bdddd291ce0d4b648f20b9799d18"} Feb 20 12:09:16.891212 master-0 kubenswrapper[31420]: I0220 12:09:16.891137 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4d6df1a9-67a1-4776-917e-aa4aa6424faf","Type":"ContainerStarted","Data":"78d9c1859a0265cd88526ecaa4e4cfc53d329e67ee9ae44872d14100659d609e"} Feb 20 12:09:16.940360 master-0 kubenswrapper[31420]: I0220 12:09:16.940238 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.404164383 podStartE2EDuration="10.940203063s" podCreationTimestamp="2026-02-20 12:09:06 +0000 UTC" firstStartedPulling="2026-02-20 12:09:08.763249885 +0000 UTC m=+253.482488166" lastFinishedPulling="2026-02-20 12:09:16.299288565 +0000 UTC m=+261.018526846" observedRunningTime="2026-02-20 12:09:16.929071285 +0000 UTC m=+261.648309606" watchObservedRunningTime="2026-02-20 12:09:16.940203063 +0000 UTC m=+261.659441344" Feb 20 12:09:17.910081 master-0 kubenswrapper[31420]: I0220 12:09:17.909739 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" event={"ID":"b228c455-3f6c-4557-8bf1-e7b2fe45f275","Type":"ContainerStarted","Data":"02489b2f781baa8fea4e90ee6bba34d8134d78bd4514c921fc45a0c5a82427f3"} Feb 20 
12:09:17.964870 master-0 kubenswrapper[31420]: I0220 12:09:17.964786 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:18.107144 master-0 kubenswrapper[31420]: I0220 12:09:18.107026 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" podStartSLOduration=3.274767938 podStartE2EDuration="11.106998952s" podCreationTimestamp="2026-02-20 12:09:07 +0000 UTC" firstStartedPulling="2026-02-20 12:09:08.447741908 +0000 UTC m=+253.166980149" lastFinishedPulling="2026-02-20 12:09:16.279972922 +0000 UTC m=+260.999211163" observedRunningTime="2026-02-20 12:09:18.103810324 +0000 UTC m=+262.823048575" watchObservedRunningTime="2026-02-20 12:09:18.106998952 +0000 UTC m=+262.826237193" Feb 20 12:09:19.942045 master-0 kubenswrapper[31420]: I0220 12:09:19.941911 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2efa6b2-2332-40b8-8e94-e7d8552ab973","Type":"ContainerStarted","Data":"d1cbf922ae6d3ae643a5f399cd173e385d3db113f43a96de103d2d889a5f6e4f"} Feb 20 12:09:19.946938 master-0 kubenswrapper[31420]: I0220 12:09:19.942041 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2efa6b2-2332-40b8-8e94-e7d8552ab973","Type":"ContainerStarted","Data":"27d2138da412deaa4aab5f1abe5ee147fa633e00bbf7e7349b864e036305218d"} Feb 20 12:09:19.946938 master-0 kubenswrapper[31420]: I0220 12:09:19.942105 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2efa6b2-2332-40b8-8e94-e7d8552ab973","Type":"ContainerStarted","Data":"7234e37e9c03273ab9f1d4bec95d4a2fe7a0bffafabbc7f8d8a6f0ee7f801ea8"} Feb 20 12:09:20.960464 master-0 kubenswrapper[31420]: I0220 12:09:20.960332 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2efa6b2-2332-40b8-8e94-e7d8552ab973","Type":"ContainerStarted","Data":"e518d0f19b936ddff57b5cd9c80b2d2ce1aa2cbfed0825591633a91cb8de390a"} Feb 20 12:09:20.960464 master-0 kubenswrapper[31420]: I0220 12:09:20.960469 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2efa6b2-2332-40b8-8e94-e7d8552ab973","Type":"ContainerStarted","Data":"c593738f3a3131e1534dbbbb84f689116e91c442415ada511582b15aee536ab1"} Feb 20 12:09:20.961423 master-0 kubenswrapper[31420]: I0220 12:09:20.960497 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e2efa6b2-2332-40b8-8e94-e7d8552ab973","Type":"ContainerStarted","Data":"4a38991fd7b3bcfbcd02fc0cb328809f42365cc8eb6c7d38d7385f7acfd67a75"} Feb 20 12:09:21.020590 master-0 kubenswrapper[31420]: I0220 12:09:21.020410 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.740156378 podStartE2EDuration="7.020375481s" podCreationTimestamp="2026-02-20 12:09:14 +0000 UTC" firstStartedPulling="2026-02-20 12:09:15.85954974 +0000 UTC m=+260.578788021" lastFinishedPulling="2026-02-20 12:09:19.139768883 +0000 UTC m=+263.859007124" observedRunningTime="2026-02-20 12:09:21.011779174 +0000 UTC m=+265.731017475" watchObservedRunningTime="2026-02-20 12:09:21.020375481 +0000 UTC m=+265.739613762" Feb 20 12:09:22.979428 master-0 kubenswrapper[31420]: I0220 12:09:22.979340 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-57dfb4b6b4-gvjmn" Feb 20 12:09:24.692031 master-0 kubenswrapper[31420]: I0220 12:09:24.691973 31420 patch_prober.go:28] interesting pod/console-6d9c46fd68-spxt2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: 
connect: connection refused" start-of-body= Feb 20 12:09:24.692658 master-0 kubenswrapper[31420]: I0220 12:09:24.692627 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6d9c46fd68-spxt2" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Feb 20 12:09:24.769087 master-0 kubenswrapper[31420]: I0220 12:09:24.769015 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:09:25.924768 master-0 kubenswrapper[31420]: I0220 12:09:25.924684 31420 patch_prober.go:28] interesting pod/console-7f6444fbcc-rvd49 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 20 12:09:25.925639 master-0 kubenswrapper[31420]: I0220 12:09:25.924785 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7f6444fbcc-rvd49" podUID="79e3c3e3-1405-4ac9-b024-b6f2d25347b4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 20 12:09:30.643868 master-0 kubenswrapper[31420]: I0220 12:09:30.643787 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:30.645162 master-0 kubenswrapper[31420]: I0220 12:09:30.643940 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:31.312732 master-0 kubenswrapper[31420]: I0220 12:09:31.312490 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" 
podUID="f47a071f-b3f1-4b00-9e67-d39b3e89eaac" containerName="oauth-openshift" containerID="cri-o://09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687" gracePeriod=15 Feb 20 12:09:31.989437 master-0 kubenswrapper[31420]: I0220 12:09:31.989367 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" Feb 20 12:09:32.049662 master-0 kubenswrapper[31420]: I0220 12:09:32.049288 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"] Feb 20 12:09:32.050136 master-0 kubenswrapper[31420]: E0220 12:09:32.049678 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47a071f-b3f1-4b00-9e67-d39b3e89eaac" containerName="oauth-openshift" Feb 20 12:09:32.050136 master-0 kubenswrapper[31420]: I0220 12:09:32.049695 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47a071f-b3f1-4b00-9e67-d39b3e89eaac" containerName="oauth-openshift" Feb 20 12:09:32.050136 master-0 kubenswrapper[31420]: I0220 12:09:32.049883 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47a071f-b3f1-4b00-9e67-d39b3e89eaac" containerName="oauth-openshift" Feb 20 12:09:32.050464 master-0 kubenswrapper[31420]: I0220 12:09:32.050434 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.054514 master-0 kubenswrapper[31420]: I0220 12:09:32.054446 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.054817 master-0 kubenswrapper[31420]: I0220 12:09:32.054662 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.054817 master-0 kubenswrapper[31420]: I0220 12:09:32.054722 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.054817 master-0 kubenswrapper[31420]: I0220 12:09:32.054760 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-user-template-error\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.054817 master-0 kubenswrapper[31420]: I0220 12:09:32.054789 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-router-certs\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.055198 master-0 kubenswrapper[31420]: I0220 12:09:32.054825 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-audit-policies\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.055198 master-0 kubenswrapper[31420]: I0220 12:09:32.054999 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-service-ca\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.055198 master-0 kubenswrapper[31420]: I0220 12:09:32.055069 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kthjk\" (UniqueName: \"kubernetes.io/projected/b5175581-36b3-4313-99aa-2e404a4c38cb-kube-api-access-kthjk\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.055516 master-0 kubenswrapper[31420]: I0220 12:09:32.055232 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.055516 master-0 kubenswrapper[31420]: I0220 12:09:32.055305 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.055516 master-0 kubenswrapper[31420]: I0220 12:09:32.055343 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5175581-36b3-4313-99aa-2e404a4c38cb-audit-dir\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.055516 master-0 kubenswrapper[31420]: I0220 12:09:32.055451 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-session\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.055516 master-0 kubenswrapper[31420]: I0220 12:09:32.055501 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-user-template-login\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.071873 master-0 kubenswrapper[31420]: I0220 12:09:32.071826 31420 generic.go:334] "Generic (PLEG): container finished" podID="f47a071f-b3f1-4b00-9e67-d39b3e89eaac" containerID="09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687" exitCode=0
Feb 20 12:09:32.071873 master-0 kubenswrapper[31420]: I0220 12:09:32.071875 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" event={"ID":"f47a071f-b3f1-4b00-9e67-d39b3e89eaac","Type":"ContainerDied","Data":"09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687"}
Feb 20 12:09:32.072132 master-0 kubenswrapper[31420]: I0220 12:09:32.071903 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l" event={"ID":"f47a071f-b3f1-4b00-9e67-d39b3e89eaac","Type":"ContainerDied","Data":"0d48f808852dd525b6997a8f44f34e27f45897e6032be5ebde82135fe3e20334"}
Feb 20 12:09:32.072132 master-0 kubenswrapper[31420]: I0220 12:09:32.071909 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"
Feb 20 12:09:32.072448 master-0 kubenswrapper[31420]: I0220 12:09:32.071920 31420 scope.go:117] "RemoveContainer" containerID="09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687"
Feb 20 12:09:32.073821 master-0 kubenswrapper[31420]: I0220 12:09:32.073798 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"]
Feb 20 12:09:32.101139 master-0 kubenswrapper[31420]: I0220 12:09:32.100849 31420 scope.go:117] "RemoveContainer" containerID="09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687"
Feb 20 12:09:32.102453 master-0 kubenswrapper[31420]: E0220 12:09:32.101499 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687\": container with ID starting with 09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687 not found: ID does not exist" containerID="09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687"
Feb 20 12:09:32.102453 master-0 kubenswrapper[31420]: I0220 12:09:32.101581 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687"} err="failed to get container status \"09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687\": rpc error: code = NotFound desc = could not find container \"09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687\": container with ID starting with 09673be12905973c2b1790af259ba0c225fcb1ae41753e836a740deaba11a687 not found: ID does not exist"
Feb 20 12:09:32.156260 master-0 kubenswrapper[31420]: I0220 12:09:32.156112 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-router-certs\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156260 master-0 kubenswrapper[31420]: I0220 12:09:32.156212 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-provider-selection\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156614 master-0 kubenswrapper[31420]: I0220 12:09:32.156266 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-dir\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156614 master-0 kubenswrapper[31420]: I0220 12:09:32.156302 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl4dd\" (UniqueName: \"kubernetes.io/projected/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-kube-api-access-gl4dd\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156614 master-0 kubenswrapper[31420]: I0220 12:09:32.156340 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-login\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156614 master-0 kubenswrapper[31420]: I0220 12:09:32.156364 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-trusted-ca-bundle\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156614 master-0 kubenswrapper[31420]: I0220 12:09:32.156419 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-service-ca\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156614 master-0 kubenswrapper[31420]: I0220 12:09:32.156442 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-cliconfig\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156614 master-0 kubenswrapper[31420]: I0220 12:09:32.156473 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-ocp-branding-template\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156614 master-0 kubenswrapper[31420]: I0220 12:09:32.156509 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-error\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.156614 master-0 kubenswrapper[31420]: I0220 12:09:32.156473 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:09:32.157240 master-0 kubenswrapper[31420]: I0220 12:09:32.156570 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-serving-cert\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.157240 master-0 kubenswrapper[31420]: I0220 12:09:32.156743 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-policies\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.157240 master-0 kubenswrapper[31420]: I0220 12:09:32.156903 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-session\") pod \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\" (UID: \"f47a071f-b3f1-4b00-9e67-d39b3e89eaac\") "
Feb 20 12:09:32.158313 master-0 kubenswrapper[31420]: I0220 12:09:32.158243 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:09:32.158313 master-0 kubenswrapper[31420]: I0220 12:09:32.158269 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:09:32.158483 master-0 kubenswrapper[31420]: I0220 12:09:32.158302 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:09:32.159647 master-0 kubenswrapper[31420]: I0220 12:09:32.158507 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:09:32.159647 master-0 kubenswrapper[31420]: I0220 12:09:32.158783 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.159647 master-0 kubenswrapper[31420]: I0220 12:09:32.158946 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.159647 master-0 kubenswrapper[31420]: I0220 12:09:32.158987 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.159647 master-0 kubenswrapper[31420]: I0220 12:09:32.159026 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-user-template-error\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.159647 master-0 kubenswrapper[31420]: I0220 12:09:32.159218 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-router-certs\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.159647 master-0 kubenswrapper[31420]: I0220 12:09:32.159458 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-audit-policies\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.159647 master-0 kubenswrapper[31420]: I0220 12:09:32.159598 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-service-ca\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160005 master-0 kubenswrapper[31420]: I0220 12:09:32.159680 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kthjk\" (UniqueName: \"kubernetes.io/projected/b5175581-36b3-4313-99aa-2e404a4c38cb-kube-api-access-kthjk\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160005 master-0 kubenswrapper[31420]: I0220 12:09:32.159734 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160005 master-0 kubenswrapper[31420]: I0220 12:09:32.159779 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160005 master-0 kubenswrapper[31420]: I0220 12:09:32.159873 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5175581-36b3-4313-99aa-2e404a4c38cb-audit-dir\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160295 master-0 kubenswrapper[31420]: I0220 12:09:32.160260 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-session\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160354 master-0 kubenswrapper[31420]: I0220 12:09:32.160294 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-service-ca\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160354 master-0 kubenswrapper[31420]: I0220 12:09:32.160269 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160429 master-0 kubenswrapper[31420]: I0220 12:09:32.160328 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-user-template-login\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160602 master-0 kubenswrapper[31420]: I0220 12:09:32.160548 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5175581-36b3-4313-99aa-2e404a4c38cb-audit-dir\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.160659 master-0 kubenswrapper[31420]: I0220 12:09:32.160571 31420 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.160697 master-0 kubenswrapper[31420]: I0220 12:09:32.160674 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.160767 master-0 kubenswrapper[31420]: I0220 12:09:32.160706 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.160767 master-0 kubenswrapper[31420]: I0220 12:09:32.160715 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-kube-api-access-gl4dd" (OuterVolumeSpecName: "kube-api-access-gl4dd") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "kube-api-access-gl4dd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:09:32.160767 master-0 kubenswrapper[31420]: I0220 12:09:32.160734 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.160767 master-0 kubenswrapper[31420]: I0220 12:09:32.160760 31420 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-audit-policies\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.161009 master-0 kubenswrapper[31420]: I0220 12:09:32.160977 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.161056 master-0 kubenswrapper[31420]: I0220 12:09:32.161039 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:09:32.161148 master-0 kubenswrapper[31420]: I0220 12:09:32.161098 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5175581-36b3-4313-99aa-2e404a4c38cb-audit-policies\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.161839 master-0 kubenswrapper[31420]: I0220 12:09:32.161794 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:09:32.161985 master-0 kubenswrapper[31420]: I0220 12:09:32.161936 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:09:32.163814 master-0 kubenswrapper[31420]: I0220 12:09:32.163739 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:09:32.163947 master-0 kubenswrapper[31420]: I0220 12:09:32.163905 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-router-certs\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.164328 master-0 kubenswrapper[31420]: I0220 12:09:32.164290 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:09:32.164470 master-0 kubenswrapper[31420]: I0220 12:09:32.164439 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:09:32.164814 master-0 kubenswrapper[31420]: I0220 12:09:32.164750 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.165115 master-0 kubenswrapper[31420]: I0220 12:09:32.165069 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-user-template-login\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.165307 master-0 kubenswrapper[31420]: I0220 12:09:32.165251 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f47a071f-b3f1-4b00-9e67-d39b3e89eaac" (UID: "f47a071f-b3f1-4b00-9e67-d39b3e89eaac"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:09:32.166248 master-0 kubenswrapper[31420]: I0220 12:09:32.166214 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.166296 master-0 kubenswrapper[31420]: I0220 12:09:32.166220 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-user-template-error\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.167180 master-0 kubenswrapper[31420]: I0220 12:09:32.167147 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-system-session\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.167933 master-0 kubenswrapper[31420]: I0220 12:09:32.167881 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5175581-36b3-4313-99aa-2e404a4c38cb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.181626 master-0 kubenswrapper[31420]: I0220 12:09:32.181576 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kthjk\" (UniqueName: \"kubernetes.io/projected/b5175581-36b3-4313-99aa-2e404a4c38cb-kube-api-access-kthjk\") pod \"oauth-openshift-ffd7cb8f5-hmkkp\" (UID: \"b5175581-36b3-4313-99aa-2e404a4c38cb\") " pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.263568 master-0 kubenswrapper[31420]: I0220 12:09:32.263442 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.264264 master-0 kubenswrapper[31420]: I0220 12:09:32.263745 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.264264 master-0 kubenswrapper[31420]: I0220 12:09:32.263779 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.264264 master-0 kubenswrapper[31420]: I0220 12:09:32.263805 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.264264 master-0 kubenswrapper[31420]: I0220 12:09:32.263830 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.264264 master-0 kubenswrapper[31420]: I0220 12:09:32.263852 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.264264 master-0 kubenswrapper[31420]: I0220 12:09:32.263877 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl4dd\" (UniqueName: \"kubernetes.io/projected/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-kube-api-access-gl4dd\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.264264 master-0 kubenswrapper[31420]: I0220 12:09:32.263899 31420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f47a071f-b3f1-4b00-9e67-d39b3e89eaac-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Feb 20 12:09:32.391020 master-0 kubenswrapper[31420]: I0220 12:09:32.390577 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:32.445817 master-0 kubenswrapper[31420]: I0220 12:09:32.445683 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"]
Feb 20 12:09:32.452910 master-0 kubenswrapper[31420]: I0220 12:09:32.452353 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-64ddc49fd6-t9s4l"]
Feb 20 12:09:32.735202 master-0 kubenswrapper[31420]: I0220 12:09:32.735090 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"]
Feb 20 12:09:32.745840 master-0 kubenswrapper[31420]: W0220 12:09:32.745766 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5175581_36b3_4313_99aa_2e404a4c38cb.slice/crio-a1549b7f5a45862a6c60248de8bcbe0fdbfb4e5c262cf39d88f6ae66775983c0 WatchSource:0}: Error finding container a1549b7f5a45862a6c60248de8bcbe0fdbfb4e5c262cf39d88f6ae66775983c0: Status 404 returned error can't find the container with id a1549b7f5a45862a6c60248de8bcbe0fdbfb4e5c262cf39d88f6ae66775983c0
Feb 20 12:09:33.086444 master-0 kubenswrapper[31420]: I0220 12:09:33.086343 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp" event={"ID":"b5175581-36b3-4313-99aa-2e404a4c38cb","Type":"ContainerStarted","Data":"a1549b7f5a45862a6c60248de8bcbe0fdbfb4e5c262cf39d88f6ae66775983c0"}
Feb 20 12:09:33.511835 master-0 kubenswrapper[31420]: I0220 12:09:33.511730 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47a071f-b3f1-4b00-9e67-d39b3e89eaac" path="/var/lib/kubelet/pods/f47a071f-b3f1-4b00-9e67-d39b3e89eaac/volumes"
Feb 20 12:09:34.098695 master-0 kubenswrapper[31420]: I0220 12:09:34.098628 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp" event={"ID":"b5175581-36b3-4313-99aa-2e404a4c38cb","Type":"ContainerStarted","Data":"a15976516b8606bbccc0d3d3d2ff70c7362dc00980b4df0eae40463c5e363504"}
Feb 20 12:09:34.100064 master-0 kubenswrapper[31420]: I0220 12:09:34.099970 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:34.112997 master-0 kubenswrapper[31420]: I0220 12:09:34.112928 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp"
Feb 20 12:09:34.148007 master-0 kubenswrapper[31420]: I0220 12:09:34.147819 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-ffd7cb8f5-hmkkp" podStartSLOduration=28.147785206000002 podStartE2EDuration="28.147785206s" podCreationTimestamp="2026-02-20 12:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:09:34.140094154 +0000 UTC m=+278.859332445" watchObservedRunningTime="2026-02-20 12:09:34.147785206 +0000 UTC m=+278.867023487"
Feb 20 12:09:34.692864 master-0 kubenswrapper[31420]: I0220 12:09:34.692761 31420 patch_prober.go:28] interesting pod/console-6d9c46fd68-spxt2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Feb 20 12:09:34.693193 master-0 kubenswrapper[31420]: I0220 12:09:34.692853 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6d9c46fd68-spxt2" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Feb 20 12:09:35.924983 master-0 
kubenswrapper[31420]: I0220 12:09:35.924886 31420 patch_prober.go:28] interesting pod/console-7f6444fbcc-rvd49 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 20 12:09:35.925977 master-0 kubenswrapper[31420]: I0220 12:09:35.924994 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7f6444fbcc-rvd49" podUID="79e3c3e3-1405-4ac9-b024-b6f2d25347b4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 20 12:09:44.699023 master-0 kubenswrapper[31420]: I0220 12:09:44.698969 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:09:44.705386 master-0 kubenswrapper[31420]: I0220 12:09:44.705360 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:09:45.931402 master-0 kubenswrapper[31420]: I0220 12:09:45.931332 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f6444fbcc-rvd49" Feb 20 12:09:45.939192 master-0 kubenswrapper[31420]: I0220 12:09:45.939112 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f6444fbcc-rvd49" Feb 20 12:09:46.352293 master-0 kubenswrapper[31420]: I0220 12:09:46.352209 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6d9c46fd68-spxt2"] Feb 20 12:09:50.652627 master-0 kubenswrapper[31420]: I0220 12:09:50.652512 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:50.659051 master-0 kubenswrapper[31420]: I0220 12:09:50.659000 31420 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-65bb9698b4-rf9nz" Feb 20 12:09:55.486210 master-0 kubenswrapper[31420]: I0220 12:09:55.486145 31420 kubelet.go:1505] "Image garbage collection succeeded" Feb 20 12:10:11.423312 master-0 kubenswrapper[31420]: I0220 12:10:11.423160 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6d9c46fd68-spxt2" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" containerID="cri-o://9fb57990b9207fa6d4fe791972eb076de54f242b4467b952a304b997f55aee4c" gracePeriod=15 Feb 20 12:10:11.862744 master-0 kubenswrapper[31420]: I0220 12:10:11.862288 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6d9c46fd68-spxt2_8a7f358a-4a42-4323-ba9f-888aec86247a/console/1.log" Feb 20 12:10:11.863024 master-0 kubenswrapper[31420]: I0220 12:10:11.862980 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6d9c46fd68-spxt2_8a7f358a-4a42-4323-ba9f-888aec86247a/console/0.log" Feb 20 12:10:11.863098 master-0 kubenswrapper[31420]: I0220 12:10:11.863023 31420 generic.go:334] "Generic (PLEG): container finished" podID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerID="9fb57990b9207fa6d4fe791972eb076de54f242b4467b952a304b997f55aee4c" exitCode=2 Feb 20 12:10:11.863098 master-0 kubenswrapper[31420]: I0220 12:10:11.863060 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d9c46fd68-spxt2" event={"ID":"8a7f358a-4a42-4323-ba9f-888aec86247a","Type":"ContainerDied","Data":"9fb57990b9207fa6d4fe791972eb076de54f242b4467b952a304b997f55aee4c"} Feb 20 12:10:11.863098 master-0 kubenswrapper[31420]: I0220 12:10:11.863095 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d9c46fd68-spxt2" 
event={"ID":"8a7f358a-4a42-4323-ba9f-888aec86247a","Type":"ContainerDied","Data":"272a477ac555c214c0000ad8fa9ef87fac0ca7099e40e3ddb86900dd5ba529d1"} Feb 20 12:10:11.863296 master-0 kubenswrapper[31420]: I0220 12:10:11.863110 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="272a477ac555c214c0000ad8fa9ef87fac0ca7099e40e3ddb86900dd5ba529d1" Feb 20 12:10:11.863296 master-0 kubenswrapper[31420]: I0220 12:10:11.863129 31420 scope.go:117] "RemoveContainer" containerID="7f2f2cb27bbfff87820a7d62fa5278c5885f9c6dd9922125c3a1f81bdc7d7195" Feb 20 12:10:11.949561 master-0 kubenswrapper[31420]: I0220 12:10:11.949505 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6d9c46fd68-spxt2_8a7f358a-4a42-4323-ba9f-888aec86247a/console/1.log" Feb 20 12:10:11.949813 master-0 kubenswrapper[31420]: I0220 12:10:11.949617 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:10:11.965459 master-0 kubenswrapper[31420]: I0220 12:10:11.965364 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-service-ca\") pod \"8a7f358a-4a42-4323-ba9f-888aec86247a\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " Feb 20 12:10:11.965674 master-0 kubenswrapper[31420]: I0220 12:10:11.965575 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") pod \"8a7f358a-4a42-4323-ba9f-888aec86247a\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " Feb 20 12:10:11.965674 master-0 kubenswrapper[31420]: I0220 12:10:11.965624 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-oauth-config\") pod \"8a7f358a-4a42-4323-ba9f-888aec86247a\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " Feb 20 12:10:11.965815 master-0 kubenswrapper[31420]: I0220 12:10:11.965699 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-trusted-ca-bundle\") pod \"8a7f358a-4a42-4323-ba9f-888aec86247a\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " Feb 20 12:10:11.965815 master-0 kubenswrapper[31420]: I0220 12:10:11.965802 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnlk9\" (UniqueName: \"kubernetes.io/projected/8a7f358a-4a42-4323-ba9f-888aec86247a-kube-api-access-nnlk9\") pod \"8a7f358a-4a42-4323-ba9f-888aec86247a\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " Feb 20 12:10:11.965960 master-0 kubenswrapper[31420]: I0220 12:10:11.965858 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-oauth-serving-cert\") pod \"8a7f358a-4a42-4323-ba9f-888aec86247a\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " Feb 20 12:10:11.965960 master-0 kubenswrapper[31420]: I0220 12:10:11.965897 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-console-config\") pod \"8a7f358a-4a42-4323-ba9f-888aec86247a\" (UID: \"8a7f358a-4a42-4323-ba9f-888aec86247a\") " Feb 20 12:10:11.966759 master-0 kubenswrapper[31420]: I0220 12:10:11.966691 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-service-ca" (OuterVolumeSpecName: "service-ca") pod "8a7f358a-4a42-4323-ba9f-888aec86247a" (UID: 
"8a7f358a-4a42-4323-ba9f-888aec86247a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:10:11.967097 master-0 kubenswrapper[31420]: I0220 12:10:11.967060 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8a7f358a-4a42-4323-ba9f-888aec86247a" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:10:11.967381 master-0 kubenswrapper[31420]: I0220 12:10:11.967333 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8a7f358a-4a42-4323-ba9f-888aec86247a" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:10:11.967852 master-0 kubenswrapper[31420]: I0220 12:10:11.967794 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-console-config" (OuterVolumeSpecName: "console-config") pod "8a7f358a-4a42-4323-ba9f-888aec86247a" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:10:11.968301 master-0 kubenswrapper[31420]: I0220 12:10:11.968244 31420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 12:10:11.968301 master-0 kubenswrapper[31420]: I0220 12:10:11.968291 31420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:10:11.968514 master-0 kubenswrapper[31420]: I0220 12:10:11.968323 31420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 12:10:11.968685 master-0 kubenswrapper[31420]: I0220 12:10:11.968574 31420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a7f358a-4a42-4323-ba9f-888aec86247a-console-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:10:11.971649 master-0 kubenswrapper[31420]: I0220 12:10:11.971591 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8a7f358a-4a42-4323-ba9f-888aec86247a" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:10:11.971892 master-0 kubenswrapper[31420]: I0220 12:10:11.971840 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8a7f358a-4a42-4323-ba9f-888aec86247a" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:10:11.972348 master-0 kubenswrapper[31420]: I0220 12:10:11.972289 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7f358a-4a42-4323-ba9f-888aec86247a-kube-api-access-nnlk9" (OuterVolumeSpecName: "kube-api-access-nnlk9") pod "8a7f358a-4a42-4323-ba9f-888aec86247a" (UID: "8a7f358a-4a42-4323-ba9f-888aec86247a"). InnerVolumeSpecName "kube-api-access-nnlk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:10:12.070187 master-0 kubenswrapper[31420]: I0220 12:10:12.070090 31420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 12:10:12.070187 master-0 kubenswrapper[31420]: I0220 12:10:12.070164 31420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a7f358a-4a42-4323-ba9f-888aec86247a-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:10:12.070187 master-0 kubenswrapper[31420]: I0220 12:10:12.070196 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnlk9\" (UniqueName: \"kubernetes.io/projected/8a7f358a-4a42-4323-ba9f-888aec86247a-kube-api-access-nnlk9\") on node \"master-0\" DevicePath \"\"" Feb 20 12:10:12.876306 master-0 kubenswrapper[31420]: I0220 12:10:12.876215 31420 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-console_console-6d9c46fd68-spxt2_8a7f358a-4a42-4323-ba9f-888aec86247a/console/1.log" Feb 20 12:10:12.877380 master-0 kubenswrapper[31420]: I0220 12:10:12.876429 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d9c46fd68-spxt2" Feb 20 12:10:12.943944 master-0 kubenswrapper[31420]: I0220 12:10:12.943864 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6d9c46fd68-spxt2"] Feb 20 12:10:12.957375 master-0 kubenswrapper[31420]: I0220 12:10:12.957320 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6d9c46fd68-spxt2"] Feb 20 12:10:13.511754 master-0 kubenswrapper[31420]: I0220 12:10:13.511688 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" path="/var/lib/kubelet/pods/8a7f358a-4a42-4323-ba9f-888aec86247a/volumes" Feb 20 12:10:14.769672 master-0 kubenswrapper[31420]: I0220 12:10:14.769590 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:10:14.818724 master-0 kubenswrapper[31420]: I0220 12:10:14.818599 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:10:14.920968 master-0 kubenswrapper[31420]: I0220 12:10:14.920911 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 12:10:34.372032 master-0 kubenswrapper[31420]: I0220 12:10:34.371644 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 20 12:10:34.373330 master-0 kubenswrapper[31420]: E0220 12:10:34.373092 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" Feb 20 12:10:34.373330 master-0 
kubenswrapper[31420]: I0220 12:10:34.373131 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" Feb 20 12:10:34.373330 master-0 kubenswrapper[31420]: E0220 12:10:34.373173 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" Feb 20 12:10:34.373330 master-0 kubenswrapper[31420]: I0220 12:10:34.373193 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" Feb 20 12:10:34.373753 master-0 kubenswrapper[31420]: I0220 12:10:34.373477 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" Feb 20 12:10:34.373753 master-0 kubenswrapper[31420]: I0220 12:10:34.373720 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7f358a-4a42-4323-ba9f-888aec86247a" containerName="console" Feb 20 12:10:34.375022 master-0 kubenswrapper[31420]: I0220 12:10:34.374978 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.380355 master-0 kubenswrapper[31420]: I0220 12:10:34.378960 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-h4rwl" Feb 20 12:10:34.380355 master-0 kubenswrapper[31420]: I0220 12:10:34.378988 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 20 12:10:34.393272 master-0 kubenswrapper[31420]: I0220 12:10:34.393194 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 20 12:10:34.503449 master-0 kubenswrapper[31420]: I0220 12:10:34.503356 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kube-api-access\") pod \"installer-4-master-0\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.503796 master-0 kubenswrapper[31420]: I0220 12:10:34.503600 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.504193 master-0 kubenswrapper[31420]: I0220 12:10:34.504144 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-var-lock\") pod \"installer-4-master-0\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.606927 master-0 
kubenswrapper[31420]: I0220 12:10:34.606333 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kube-api-access\") pod \"installer-4-master-0\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.607248 master-0 kubenswrapper[31420]: I0220 12:10:34.606984 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.607248 master-0 kubenswrapper[31420]: I0220 12:10:34.607087 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.607394 master-0 kubenswrapper[31420]: I0220 12:10:34.607276 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-var-lock\") pod \"installer-4-master-0\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.607394 master-0 kubenswrapper[31420]: I0220 12:10:34.607383 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-var-lock\") pod \"installer-4-master-0\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.637765 
master-0 kubenswrapper[31420]: I0220 12:10:34.637600 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kube-api-access\") pod \"installer-4-master-0\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:34.718902 master-0 kubenswrapper[31420]: I0220 12:10:34.718816 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:10:35.270394 master-0 kubenswrapper[31420]: W0220 12:10:35.270326 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcbbaa38c_b860_4dbd_9825_bc28b2d025bf.slice/crio-fbf8b5b0f331a64f5cc61c37255326f0225bdd85045a29707215558bb9746025 WatchSource:0}: Error finding container fbf8b5b0f331a64f5cc61c37255326f0225bdd85045a29707215558bb9746025: Status 404 returned error can't find the container with id fbf8b5b0f331a64f5cc61c37255326f0225bdd85045a29707215558bb9746025 Feb 20 12:10:35.272317 master-0 kubenswrapper[31420]: I0220 12:10:35.272248 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 20 12:10:36.106371 master-0 kubenswrapper[31420]: I0220 12:10:36.106265 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"cbbaa38c-b860-4dbd-9825-bc28b2d025bf","Type":"ContainerStarted","Data":"b84c311ffbf185011e1ed529b5a16a34d8bb89cf18fb9ad41458210ec9a27f14"} Feb 20 12:10:36.106371 master-0 kubenswrapper[31420]: I0220 12:10:36.106344 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"cbbaa38c-b860-4dbd-9825-bc28b2d025bf","Type":"ContainerStarted","Data":"fbf8b5b0f331a64f5cc61c37255326f0225bdd85045a29707215558bb9746025"} 
Feb 20 12:10:36.188962 master-0 kubenswrapper[31420]: I0220 12:10:36.188801 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.188749334 podStartE2EDuration="2.188749334s" podCreationTimestamp="2026-02-20 12:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:10:36.187313674 +0000 UTC m=+340.906551995" watchObservedRunningTime="2026-02-20 12:10:36.188749334 +0000 UTC m=+340.907987615" Feb 20 12:10:45.991746 master-0 kubenswrapper[31420]: I0220 12:10:45.991638 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b66c6d797-hsk56"] Feb 20 12:10:45.993300 master-0 kubenswrapper[31420]: I0220 12:10:45.993254 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b66c6d797-hsk56" Feb 20 12:10:46.016231 master-0 kubenswrapper[31420]: I0220 12:10:46.016182 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b66c6d797-hsk56"] Feb 20 12:10:46.115088 master-0 kubenswrapper[31420]: I0220 12:10:46.115012 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8488\" (UniqueName: \"kubernetes.io/projected/1bccbfa1-2cb4-462d-b428-e51795ea2442-kube-api-access-q8488\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56" Feb 20 12:10:46.115088 master-0 kubenswrapper[31420]: I0220 12:10:46.115090 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-service-ca\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " 
pod="openshift-console/console-5b66c6d797-hsk56" Feb 20 12:10:46.115371 master-0 kubenswrapper[31420]: I0220 12:10:46.115140 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-trusted-ca-bundle\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56" Feb 20 12:10:46.115371 master-0 kubenswrapper[31420]: I0220 12:10:46.115167 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-oauth-serving-cert\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56" Feb 20 12:10:46.115371 master-0 kubenswrapper[31420]: I0220 12:10:46.115187 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-config\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56" Feb 20 12:10:46.115498 master-0 kubenswrapper[31420]: I0220 12:10:46.115439 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-oauth-config\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56" Feb 20 12:10:46.115738 master-0 kubenswrapper[31420]: I0220 12:10:46.115621 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-serving-cert\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.239123 master-0 kubenswrapper[31420]: I0220 12:10:46.238974 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-trusted-ca-bundle\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.239123 master-0 kubenswrapper[31420]: I0220 12:10:46.239127 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-oauth-serving-cert\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.239591 master-0 kubenswrapper[31420]: I0220 12:10:46.239468 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-config\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.239879 master-0 kubenswrapper[31420]: I0220 12:10:46.239825 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-oauth-config\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.240100 master-0 kubenswrapper[31420]: I0220 12:10:46.240043 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-serving-cert\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.240349 master-0 kubenswrapper[31420]: I0220 12:10:46.240296 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8488\" (UniqueName: \"kubernetes.io/projected/1bccbfa1-2cb4-462d-b428-e51795ea2442-kube-api-access-q8488\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.240615 master-0 kubenswrapper[31420]: I0220 12:10:46.240486 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-service-ca\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.241168 master-0 kubenswrapper[31420]: I0220 12:10:46.241122 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-oauth-serving-cert\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.242103 master-0 kubenswrapper[31420]: I0220 12:10:46.241973 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-config\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.242231 master-0 kubenswrapper[31420]: I0220 12:10:46.242116 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-service-ca\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.243292 master-0 kubenswrapper[31420]: I0220 12:10:46.243184 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-trusted-ca-bundle\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.245901 master-0 kubenswrapper[31420]: I0220 12:10:46.245856 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-oauth-config\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.248963 master-0 kubenswrapper[31420]: I0220 12:10:46.248904 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-serving-cert\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.339836 master-0 kubenswrapper[31420]: I0220 12:10:46.339784 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8488\" (UniqueName: \"kubernetes.io/projected/1bccbfa1-2cb4-462d-b428-e51795ea2442-kube-api-access-q8488\") pod \"console-5b66c6d797-hsk56\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:46.620465 master-0 kubenswrapper[31420]: I0220 12:10:46.620374 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:47.180601 master-0 kubenswrapper[31420]: I0220 12:10:47.180559 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b66c6d797-hsk56"]
Feb 20 12:10:47.190001 master-0 kubenswrapper[31420]: W0220 12:10:47.189972 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bccbfa1_2cb4_462d_b428_e51795ea2442.slice/crio-2f349097989e1d4876bf24ffc1479a0cbeeb6f226fdd6e7c71980428aca0a813 WatchSource:0}: Error finding container 2f349097989e1d4876bf24ffc1479a0cbeeb6f226fdd6e7c71980428aca0a813: Status 404 returned error can't find the container with id 2f349097989e1d4876bf24ffc1479a0cbeeb6f226fdd6e7c71980428aca0a813
Feb 20 12:10:47.262893 master-0 kubenswrapper[31420]: I0220 12:10:47.262834 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b66c6d797-hsk56" event={"ID":"1bccbfa1-2cb4-462d-b428-e51795ea2442","Type":"ContainerStarted","Data":"2f349097989e1d4876bf24ffc1479a0cbeeb6f226fdd6e7c71980428aca0a813"}
Feb 20 12:10:48.291687 master-0 kubenswrapper[31420]: I0220 12:10:48.291581 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b66c6d797-hsk56" event={"ID":"1bccbfa1-2cb4-462d-b428-e51795ea2442","Type":"ContainerStarted","Data":"ed895d7eb4ef4d65df7c0d8d39454c7eec115cf3f49e46a95425e0d6541e8f5d"}
Feb 20 12:10:56.168657 master-0 kubenswrapper[31420]: I0220 12:10:56.167488 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b66c6d797-hsk56" podStartSLOduration=11.167467285 podStartE2EDuration="11.167467285s" podCreationTimestamp="2026-02-20 12:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:10:48.320692757 +0000 UTC m=+353.039931049" watchObservedRunningTime="2026-02-20 12:10:56.167467285 +0000 UTC m=+360.886705536"
Feb 20 12:10:56.169648 master-0 kubenswrapper[31420]: I0220 12:10:56.169236 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b66c6d797-hsk56"]
Feb 20 12:10:56.215834 master-0 kubenswrapper[31420]: I0220 12:10:56.215732 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67d46bcf94-kpph6"]
Feb 20 12:10:56.217313 master-0 kubenswrapper[31420]: I0220 12:10:56.217051 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.225259 master-0 kubenswrapper[31420]: I0220 12:10:56.223654 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-trusted-ca-bundle\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.225259 master-0 kubenswrapper[31420]: I0220 12:10:56.223712 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcrfs\" (UniqueName: \"kubernetes.io/projected/5f543dc6-6a36-46e9-b01c-bb79931b13ac-kube-api-access-bcrfs\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.225259 master-0 kubenswrapper[31420]: I0220 12:10:56.223742 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-oauth-serving-cert\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.225259 master-0 kubenswrapper[31420]: I0220 12:10:56.223805 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-serving-cert\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.225259 master-0 kubenswrapper[31420]: I0220 12:10:56.223933 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-oauth-config\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.225259 master-0 kubenswrapper[31420]: I0220 12:10:56.224074 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-config\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.225259 master-0 kubenswrapper[31420]: I0220 12:10:56.224325 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-service-ca\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.228605 master-0 kubenswrapper[31420]: I0220 12:10:56.228463 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67d46bcf94-kpph6"]
Feb 20 12:10:56.325173 master-0 kubenswrapper[31420]: I0220 12:10:56.325104 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-service-ca\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.325390 master-0 kubenswrapper[31420]: I0220 12:10:56.325222 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-trusted-ca-bundle\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.325390 master-0 kubenswrapper[31420]: I0220 12:10:56.325264 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcrfs\" (UniqueName: \"kubernetes.io/projected/5f543dc6-6a36-46e9-b01c-bb79931b13ac-kube-api-access-bcrfs\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.325390 master-0 kubenswrapper[31420]: I0220 12:10:56.325309 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-oauth-serving-cert\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.325743 master-0 kubenswrapper[31420]: I0220 12:10:56.325702 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-serving-cert\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.325825 master-0 kubenswrapper[31420]: I0220 12:10:56.325790 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-oauth-config\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.325897 master-0 kubenswrapper[31420]: I0220 12:10:56.325867 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-config\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.326143 master-0 kubenswrapper[31420]: I0220 12:10:56.326104 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-service-ca\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.327156 master-0 kubenswrapper[31420]: I0220 12:10:56.327109 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-oauth-serving-cert\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.328186 master-0 kubenswrapper[31420]: I0220 12:10:56.328134 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-config\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.330429 master-0 kubenswrapper[31420]: I0220 12:10:56.330379 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-trusted-ca-bundle\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.331242 master-0 kubenswrapper[31420]: I0220 12:10:56.330968 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-oauth-config\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.337148 master-0 kubenswrapper[31420]: I0220 12:10:56.337110 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-serving-cert\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.342927 master-0 kubenswrapper[31420]: I0220 12:10:56.342894 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcrfs\" (UniqueName: \"kubernetes.io/projected/5f543dc6-6a36-46e9-b01c-bb79931b13ac-kube-api-access-bcrfs\") pod \"console-67d46bcf94-kpph6\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.600947 master-0 kubenswrapper[31420]: I0220 12:10:56.600836 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:10:56.620943 master-0 kubenswrapper[31420]: I0220 12:10:56.620840 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b66c6d797-hsk56"
Feb 20 12:10:57.102043 master-0 kubenswrapper[31420]: I0220 12:10:57.101938 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67d46bcf94-kpph6"]
Feb 20 12:10:57.102685 master-0 kubenswrapper[31420]: W0220 12:10:57.102635 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f543dc6_6a36_46e9_b01c_bb79931b13ac.slice/crio-77daf0bc051c91820775b31053034e8ee60ba431b87896db49a9ae1194b9f5c5 WatchSource:0}: Error finding container 77daf0bc051c91820775b31053034e8ee60ba431b87896db49a9ae1194b9f5c5: Status 404 returned error can't find the container with id 77daf0bc051c91820775b31053034e8ee60ba431b87896db49a9ae1194b9f5c5
Feb 20 12:10:57.393356 master-0 kubenswrapper[31420]: I0220 12:10:57.393168 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d46bcf94-kpph6" event={"ID":"5f543dc6-6a36-46e9-b01c-bb79931b13ac","Type":"ContainerStarted","Data":"03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453"}
Feb 20 12:10:57.393356 master-0 kubenswrapper[31420]: I0220 12:10:57.393239 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d46bcf94-kpph6" event={"ID":"5f543dc6-6a36-46e9-b01c-bb79931b13ac","Type":"ContainerStarted","Data":"77daf0bc051c91820775b31053034e8ee60ba431b87896db49a9ae1194b9f5c5"}
Feb 20 12:10:57.433719 master-0 kubenswrapper[31420]: I0220 12:10:57.431869 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67d46bcf94-kpph6" podStartSLOduration=1.431839788 podStartE2EDuration="1.431839788s" podCreationTimestamp="2026-02-20 12:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:10:57.422477309 +0000 UTC m=+362.141715650" watchObservedRunningTime="2026-02-20 12:10:57.431839788 +0000 UTC m=+362.151078059"
Feb 20 12:11:06.601239 master-0 kubenswrapper[31420]: I0220 12:11:06.601179 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:11:06.601239 master-0 kubenswrapper[31420]: I0220 12:11:06.601244 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:11:06.608181 master-0 kubenswrapper[31420]: I0220 12:11:06.608112 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:11:07.489899 master-0 kubenswrapper[31420]: I0220 12:11:07.489846 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67d46bcf94-kpph6"
Feb 20 12:11:08.134649 master-0 kubenswrapper[31420]: I0220 12:11:08.134586 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7f6444fbcc-rvd49"]
Feb 20 12:11:08.660680 master-0 kubenswrapper[31420]: I0220 12:11:08.660593 31420 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 20 12:11:08.661030 master-0 kubenswrapper[31420]: I0220 12:11:08.660962 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="cluster-policy-controller" containerID="cri-o://f81c629f14de2675e27ed03b16f717338fc763727ad4d8279bef5f402d84b0bd" gracePeriod=30
Feb 20 12:11:08.661178 master-0 kubenswrapper[31420]: I0220 12:11:08.661144 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://b0d8055ab8671dd87e8c6f4600409f2168abd1ce04e2f64cb6ec241a84ad82db" gracePeriod=30
Feb 20 12:11:08.661256 master-0 kubenswrapper[31420]: I0220 12:11:08.661089 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager" containerID="cri-o://9018203c8a3208e7e4bcd5a26ed32c54dd1c05833036ee87e4b5bf9b3b7f996e" gracePeriod=30
Feb 20 12:11:08.661376 master-0 kubenswrapper[31420]: I0220 12:11:08.661202 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://7d1e40608e20859f716be438c8e8c5245ae85a9137bce4bf53bfccc4ff8fc568" gracePeriod=30
Feb 20 12:11:08.662906 master-0 kubenswrapper[31420]: I0220 12:11:08.662854 31420 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 20 12:11:08.663416 master-0 kubenswrapper[31420]: E0220 12:11:08.663374 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager"
Feb 20 12:11:08.663416 master-0 kubenswrapper[31420]: I0220 12:11:08.663411 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager"
Feb 20 12:11:08.663577 master-0 kubenswrapper[31420]: E0220 12:11:08.663468 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager-recovery-controller"
Feb 20 12:11:08.663577 master-0 kubenswrapper[31420]: I0220 12:11:08.663484 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager-recovery-controller"
Feb 20 12:11:08.663577 master-0 kubenswrapper[31420]: E0220 12:11:08.663560 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager-cert-syncer"
Feb 20 12:11:08.663577 master-0 kubenswrapper[31420]: I0220 12:11:08.663575 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager-cert-syncer"
Feb 20 12:11:08.663753 master-0 kubenswrapper[31420]: E0220 12:11:08.663616 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="cluster-policy-controller"
Feb 20 12:11:08.663753 master-0 kubenswrapper[31420]: I0220 12:11:08.663631 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="cluster-policy-controller"
Feb 20 12:11:08.663900 master-0 kubenswrapper[31420]: I0220 12:11:08.663868 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager"
Feb 20 12:11:08.663967 master-0 kubenswrapper[31420]: I0220 12:11:08.663936 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="cluster-policy-controller"
Feb 20 12:11:08.664019 master-0 kubenswrapper[31420]: I0220 12:11:08.663972 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager-recovery-controller"
Feb 20 12:11:08.664019 master-0 kubenswrapper[31420]: I0220 12:11:08.664010 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager-cert-syncer"
Feb 20 12:11:08.664364 master-0 kubenswrapper[31420]: E0220 12:11:08.664323 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager"
Feb 20 12:11:08.664364 master-0 kubenswrapper[31420]: I0220 12:11:08.664353 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager"
Feb 20 12:11:08.664726 master-0 kubenswrapper[31420]: I0220 12:11:08.664692 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="65774ccd44b6b404cec890cd0cfa3872" containerName="kube-controller-manager"
Feb 20 12:11:08.757341 master-0 kubenswrapper[31420]: I0220 12:11:08.757267 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4215950490466d14c18f823ce9d5dee1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4215950490466d14c18f823ce9d5dee1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:11:08.757341 master-0 kubenswrapper[31420]: I0220 12:11:08.757333 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4215950490466d14c18f823ce9d5dee1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4215950490466d14c18f823ce9d5dee1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:11:08.868225 master-0 kubenswrapper[31420]: I0220 12:11:08.858800 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4215950490466d14c18f823ce9d5dee1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4215950490466d14c18f823ce9d5dee1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:11:08.868225 master-0 kubenswrapper[31420]: I0220 12:11:08.858867 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4215950490466d14c18f823ce9d5dee1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4215950490466d14c18f823ce9d5dee1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:11:08.868225 master-0 kubenswrapper[31420]: I0220 12:11:08.859026 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4215950490466d14c18f823ce9d5dee1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4215950490466d14c18f823ce9d5dee1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:11:08.868225 master-0 kubenswrapper[31420]: I0220 12:11:08.859081 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4215950490466d14c18f823ce9d5dee1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4215950490466d14c18f823ce9d5dee1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:11:09.525080 master-0 kubenswrapper[31420]: I0220 12:11:09.524995 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_65774ccd44b6b404cec890cd0cfa3872/kube-controller-manager-cert-syncer/0.log"
Feb 20 12:11:09.526517 master-0 kubenswrapper[31420]: I0220 12:11:09.526457 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_65774ccd44b6b404cec890cd0cfa3872/kube-controller-manager/0.log"
Feb 20 12:11:09.526695 master-0 kubenswrapper[31420]: I0220 12:11:09.526573 31420 generic.go:334] "Generic (PLEG): container finished" podID="65774ccd44b6b404cec890cd0cfa3872" containerID="9018203c8a3208e7e4bcd5a26ed32c54dd1c05833036ee87e4b5bf9b3b7f996e" exitCode=0
Feb 20 12:11:09.526695 master-0 kubenswrapper[31420]: I0220 12:11:09.526609 31420 generic.go:334] "Generic (PLEG): container finished" podID="65774ccd44b6b404cec890cd0cfa3872" containerID="b0d8055ab8671dd87e8c6f4600409f2168abd1ce04e2f64cb6ec241a84ad82db" exitCode=0
Feb 20 12:11:09.526695 master-0 kubenswrapper[31420]: I0220 12:11:09.526625 31420 generic.go:334] "Generic (PLEG): container finished" podID="65774ccd44b6b404cec890cd0cfa3872" containerID="7d1e40608e20859f716be438c8e8c5245ae85a9137bce4bf53bfccc4ff8fc568" exitCode=2
Feb 20 12:11:09.526695 master-0 kubenswrapper[31420]: I0220 12:11:09.526640 31420 generic.go:334] "Generic (PLEG): container finished" podID="65774ccd44b6b404cec890cd0cfa3872" containerID="f81c629f14de2675e27ed03b16f717338fc763727ad4d8279bef5f402d84b0bd" exitCode=0
Feb 20 12:11:09.527140 master-0 kubenswrapper[31420]: I0220 12:11:09.526784 31420 scope.go:117] "RemoveContainer" containerID="fb26f752e48be63937e70537d486ea02b5e41733fdb3b27eed62024dc371a88d"
Feb 20 12:11:09.529655 master-0 kubenswrapper[31420]: I0220 12:11:09.529603 31420 generic.go:334] "Generic (PLEG): container finished" podID="cbbaa38c-b860-4dbd-9825-bc28b2d025bf" containerID="b84c311ffbf185011e1ed529b5a16a34d8bb89cf18fb9ad41458210ec9a27f14" exitCode=0
Feb 20 12:11:09.529852 master-0 kubenswrapper[31420]: I0220 12:11:09.529658 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"cbbaa38c-b860-4dbd-9825-bc28b2d025bf","Type":"ContainerDied","Data":"b84c311ffbf185011e1ed529b5a16a34d8bb89cf18fb9ad41458210ec9a27f14"}
Feb 20 12:11:09.596834 master-0 kubenswrapper[31420]: I0220 12:11:09.596727 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_65774ccd44b6b404cec890cd0cfa3872/kube-controller-manager-cert-syncer/0.log"
Feb 20 12:11:09.597825 master-0 kubenswrapper[31420]: I0220 12:11:09.597788 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:11:09.602614 master-0 kubenswrapper[31420]: I0220 12:11:09.602547 31420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="65774ccd44b6b404cec890cd0cfa3872" podUID="4215950490466d14c18f823ce9d5dee1"
Feb 20 12:11:09.674223 master-0 kubenswrapper[31420]: I0220 12:11:09.674051 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-cert-dir\") pod \"65774ccd44b6b404cec890cd0cfa3872\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") "
Feb 20 12:11:09.674813 master-0 kubenswrapper[31420]: I0220 12:11:09.674247 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "65774ccd44b6b404cec890cd0cfa3872" (UID: "65774ccd44b6b404cec890cd0cfa3872"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:11:09.674813 master-0 kubenswrapper[31420]: I0220 12:11:09.674274 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-resource-dir\") pod \"65774ccd44b6b404cec890cd0cfa3872\" (UID: \"65774ccd44b6b404cec890cd0cfa3872\") "
Feb 20 12:11:09.674813 master-0 kubenswrapper[31420]: I0220 12:11:09.674339 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "65774ccd44b6b404cec890cd0cfa3872" (UID: "65774ccd44b6b404cec890cd0cfa3872"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:11:09.675324 master-0 kubenswrapper[31420]: I0220 12:11:09.675270 31420 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:09.675324 master-0 kubenswrapper[31420]: I0220 12:11:09.675302 31420 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/65774ccd44b6b404cec890cd0cfa3872-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:10.541754 master-0 kubenswrapper[31420]: I0220 12:11:10.541688 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_65774ccd44b6b404cec890cd0cfa3872/kube-controller-manager-cert-syncer/0.log"
Feb 20 12:11:10.542441 master-0 kubenswrapper[31420]: I0220 12:11:10.542137 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 12:11:10.542441 master-0 kubenswrapper[31420]: I0220 12:11:10.542148 31420 scope.go:117] "RemoveContainer" containerID="9018203c8a3208e7e4bcd5a26ed32c54dd1c05833036ee87e4b5bf9b3b7f996e"
Feb 20 12:11:10.546600 master-0 kubenswrapper[31420]: I0220 12:11:10.546550 31420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="65774ccd44b6b404cec890cd0cfa3872" podUID="4215950490466d14c18f823ce9d5dee1"
Feb 20 12:11:10.558741 master-0 kubenswrapper[31420]: I0220 12:11:10.558671 31420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="65774ccd44b6b404cec890cd0cfa3872" podUID="4215950490466d14c18f823ce9d5dee1"
Feb 20 12:11:10.566446 master-0 kubenswrapper[31420]: I0220 12:11:10.566408 31420 scope.go:117] "RemoveContainer" containerID="b0d8055ab8671dd87e8c6f4600409f2168abd1ce04e2f64cb6ec241a84ad82db"
Feb 20 12:11:10.583844 master-0 kubenswrapper[31420]: I0220 12:11:10.583498 31420 scope.go:117] "RemoveContainer" containerID="7d1e40608e20859f716be438c8e8c5245ae85a9137bce4bf53bfccc4ff8fc568"
Feb 20 12:11:10.617438 master-0 kubenswrapper[31420]: I0220 12:11:10.617386 31420 scope.go:117] "RemoveContainer" containerID="f81c629f14de2675e27ed03b16f717338fc763727ad4d8279bef5f402d84b0bd"
Feb 20 12:11:10.829215 master-0 kubenswrapper[31420]: I0220 12:11:10.829032 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 12:11:10.934419 master-0 kubenswrapper[31420]: I0220 12:11:10.934338 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kubelet-dir\") pod \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") "
Feb 20 12:11:10.934733 master-0 kubenswrapper[31420]: I0220 12:11:10.934476 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cbbaa38c-b860-4dbd-9825-bc28b2d025bf" (UID: "cbbaa38c-b860-4dbd-9825-bc28b2d025bf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:11:10.934733 master-0 kubenswrapper[31420]: I0220 12:11:10.934692 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-var-lock\") pod \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") "
Feb 20 12:11:10.934870 master-0 kubenswrapper[31420]: I0220 12:11:10.934807 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-var-lock" (OuterVolumeSpecName: "var-lock") pod "cbbaa38c-b860-4dbd-9825-bc28b2d025bf" (UID: "cbbaa38c-b860-4dbd-9825-bc28b2d025bf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 12:11:10.934870 master-0 kubenswrapper[31420]: I0220 12:11:10.934843 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kube-api-access\") pod \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\" (UID: \"cbbaa38c-b860-4dbd-9825-bc28b2d025bf\") "
Feb 20 12:11:10.935408 master-0 kubenswrapper[31420]: I0220 12:11:10.935357 31420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:10.935408 master-0 kubenswrapper[31420]: I0220 12:11:10.935403 31420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:10.938658 master-0 kubenswrapper[31420]: I0220 12:11:10.938599 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cbbaa38c-b860-4dbd-9825-bc28b2d025bf" (UID: "cbbaa38c-b860-4dbd-9825-bc28b2d025bf"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:11:11.037684 master-0 kubenswrapper[31420]: I0220 12:11:11.037605 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbbaa38c-b860-4dbd-9825-bc28b2d025bf-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 12:11:11.514915 master-0 kubenswrapper[31420]: I0220 12:11:11.514817 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65774ccd44b6b404cec890cd0cfa3872" path="/var/lib/kubelet/pods/65774ccd44b6b404cec890cd0cfa3872/volumes" Feb 20 12:11:11.553355 master-0 kubenswrapper[31420]: I0220 12:11:11.553270 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"cbbaa38c-b860-4dbd-9825-bc28b2d025bf","Type":"ContainerDied","Data":"fbf8b5b0f331a64f5cc61c37255326f0225bdd85045a29707215558bb9746025"} Feb 20 12:11:11.553838 master-0 kubenswrapper[31420]: I0220 12:11:11.553361 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf8b5b0f331a64f5cc61c37255326f0225bdd85045a29707215558bb9746025" Feb 20 12:11:11.553838 master-0 kubenswrapper[31420]: I0220 12:11:11.553455 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 12:11:20.497210 master-0 kubenswrapper[31420]: I0220 12:11:20.496971 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:20.524998 master-0 kubenswrapper[31420]: I0220 12:11:20.524919 31420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bfc2967-c0f9-4004-afdb-c822e5e54561" Feb 20 12:11:20.524998 master-0 kubenswrapper[31420]: I0220 12:11:20.524984 31420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5bfc2967-c0f9-4004-afdb-c822e5e54561" Feb 20 12:11:20.552824 master-0 kubenswrapper[31420]: I0220 12:11:20.551787 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 12:11:20.552824 master-0 kubenswrapper[31420]: I0220 12:11:20.552060 31420 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:20.563918 master-0 kubenswrapper[31420]: I0220 12:11:20.563854 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 12:11:20.579376 master-0 kubenswrapper[31420]: I0220 12:11:20.575832 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:20.581201 master-0 kubenswrapper[31420]: I0220 12:11:20.581123 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 12:11:20.622091 master-0 kubenswrapper[31420]: W0220 12:11:20.622012 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4215950490466d14c18f823ce9d5dee1.slice/crio-7d1d85989b6071fa6904a4151fb09a934d62c3eb0af25bbbafb2e2f15c185df6 WatchSource:0}: Error finding container 7d1d85989b6071fa6904a4151fb09a934d62c3eb0af25bbbafb2e2f15c185df6: Status 404 returned error can't find the container with id 7d1d85989b6071fa6904a4151fb09a934d62c3eb0af25bbbafb2e2f15c185df6 Feb 20 12:11:20.649834 master-0 kubenswrapper[31420]: I0220 12:11:20.649742 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4215950490466d14c18f823ce9d5dee1","Type":"ContainerStarted","Data":"7d1d85989b6071fa6904a4151fb09a934d62c3eb0af25bbbafb2e2f15c185df6"} Feb 20 12:11:21.218848 master-0 kubenswrapper[31420]: I0220 12:11:21.218743 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b66c6d797-hsk56" podUID="1bccbfa1-2cb4-462d-b428-e51795ea2442" containerName="console" containerID="cri-o://ed895d7eb4ef4d65df7c0d8d39454c7eec115cf3f49e46a95425e0d6541e8f5d" gracePeriod=15 Feb 20 12:11:21.684981 master-0 kubenswrapper[31420]: I0220 12:11:21.684937 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b66c6d797-hsk56_1bccbfa1-2cb4-462d-b428-e51795ea2442/console/0.log" Feb 20 12:11:21.685433 master-0 kubenswrapper[31420]: I0220 12:11:21.684990 31420 generic.go:334] "Generic (PLEG): container finished" podID="1bccbfa1-2cb4-462d-b428-e51795ea2442" 
containerID="ed895d7eb4ef4d65df7c0d8d39454c7eec115cf3f49e46a95425e0d6541e8f5d" exitCode=2 Feb 20 12:11:21.685433 master-0 kubenswrapper[31420]: I0220 12:11:21.685050 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b66c6d797-hsk56" event={"ID":"1bccbfa1-2cb4-462d-b428-e51795ea2442","Type":"ContainerDied","Data":"ed895d7eb4ef4d65df7c0d8d39454c7eec115cf3f49e46a95425e0d6541e8f5d"} Feb 20 12:11:21.687129 master-0 kubenswrapper[31420]: I0220 12:11:21.687054 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4215950490466d14c18f823ce9d5dee1","Type":"ContainerStarted","Data":"b224a233c5e0f80f18e6945a7ffe677e0c44bbb26d2267f52d256d227c5d714c"} Feb 20 12:11:21.687129 master-0 kubenswrapper[31420]: I0220 12:11:21.687085 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4215950490466d14c18f823ce9d5dee1","Type":"ContainerStarted","Data":"8f0273d5b3515e457f6672eb4e4d106aeb68d91cc29568b2aeec318ea7e26ca7"} Feb 20 12:11:21.687129 master-0 kubenswrapper[31420]: I0220 12:11:21.687096 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4215950490466d14c18f823ce9d5dee1","Type":"ContainerStarted","Data":"e438b3c044ce71dcfc1d86ee0348ebec9a1e00dc32c61c8723153da039285985"} Feb 20 12:11:21.708486 master-0 kubenswrapper[31420]: I0220 12:11:21.708436 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b66c6d797-hsk56_1bccbfa1-2cb4-462d-b428-e51795ea2442/console/0.log" Feb 20 12:11:21.708588 master-0 kubenswrapper[31420]: I0220 12:11:21.708520 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b66c6d797-hsk56" Feb 20 12:11:21.747503 master-0 kubenswrapper[31420]: I0220 12:11:21.746404 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-service-ca\") pod \"1bccbfa1-2cb4-462d-b428-e51795ea2442\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " Feb 20 12:11:21.747503 master-0 kubenswrapper[31420]: I0220 12:11:21.746544 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-serving-cert\") pod \"1bccbfa1-2cb4-462d-b428-e51795ea2442\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " Feb 20 12:11:21.747503 master-0 kubenswrapper[31420]: I0220 12:11:21.747205 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-service-ca" (OuterVolumeSpecName: "service-ca") pod "1bccbfa1-2cb4-462d-b428-e51795ea2442" (UID: "1bccbfa1-2cb4-462d-b428-e51795ea2442"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:11:21.747503 master-0 kubenswrapper[31420]: I0220 12:11:21.747407 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-oauth-config\") pod \"1bccbfa1-2cb4-462d-b428-e51795ea2442\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " Feb 20 12:11:21.747503 master-0 kubenswrapper[31420]: I0220 12:11:21.747464 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-oauth-serving-cert\") pod \"1bccbfa1-2cb4-462d-b428-e51795ea2442\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " Feb 20 12:11:21.747503 master-0 kubenswrapper[31420]: I0220 12:11:21.747543 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8488\" (UniqueName: \"kubernetes.io/projected/1bccbfa1-2cb4-462d-b428-e51795ea2442-kube-api-access-q8488\") pod \"1bccbfa1-2cb4-462d-b428-e51795ea2442\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " Feb 20 12:11:21.747503 master-0 kubenswrapper[31420]: I0220 12:11:21.747592 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-trusted-ca-bundle\") pod \"1bccbfa1-2cb4-462d-b428-e51795ea2442\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " Feb 20 12:11:21.748798 master-0 kubenswrapper[31420]: I0220 12:11:21.747636 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-config\") pod \"1bccbfa1-2cb4-462d-b428-e51795ea2442\" (UID: \"1bccbfa1-2cb4-462d-b428-e51795ea2442\") " Feb 20 12:11:21.748798 master-0 kubenswrapper[31420]: I0220 
12:11:21.747875 31420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 12:11:21.748798 master-0 kubenswrapper[31420]: I0220 12:11:21.748342 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1bccbfa1-2cb4-462d-b428-e51795ea2442" (UID: "1bccbfa1-2cb4-462d-b428-e51795ea2442"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:11:21.748798 master-0 kubenswrapper[31420]: I0220 12:11:21.748387 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bccbfa1-2cb4-462d-b428-e51795ea2442" (UID: "1bccbfa1-2cb4-462d-b428-e51795ea2442"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:11:21.752670 master-0 kubenswrapper[31420]: I0220 12:11:21.751233 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-config" (OuterVolumeSpecName: "console-config") pod "1bccbfa1-2cb4-462d-b428-e51795ea2442" (UID: "1bccbfa1-2cb4-462d-b428-e51795ea2442"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:11:21.752670 master-0 kubenswrapper[31420]: I0220 12:11:21.751344 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1bccbfa1-2cb4-462d-b428-e51795ea2442" (UID: "1bccbfa1-2cb4-462d-b428-e51795ea2442"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:11:21.758086 master-0 kubenswrapper[31420]: I0220 12:11:21.757035 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bccbfa1-2cb4-462d-b428-e51795ea2442-kube-api-access-q8488" (OuterVolumeSpecName: "kube-api-access-q8488") pod "1bccbfa1-2cb4-462d-b428-e51795ea2442" (UID: "1bccbfa1-2cb4-462d-b428-e51795ea2442"). InnerVolumeSpecName "kube-api-access-q8488". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:11:21.765957 master-0 kubenswrapper[31420]: I0220 12:11:21.765905 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1bccbfa1-2cb4-462d-b428-e51795ea2442" (UID: "1bccbfa1-2cb4-462d-b428-e51795ea2442"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:11:21.849078 master-0 kubenswrapper[31420]: I0220 12:11:21.849014 31420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 12:11:21.849078 master-0 kubenswrapper[31420]: I0220 12:11:21.849059 31420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:11:21.849078 master-0 kubenswrapper[31420]: I0220 12:11:21.849073 31420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 12:11:21.849078 master-0 kubenswrapper[31420]: I0220 
12:11:21.849086 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8488\" (UniqueName: \"kubernetes.io/projected/1bccbfa1-2cb4-462d-b428-e51795ea2442-kube-api-access-q8488\") on node \"master-0\" DevicePath \"\"" Feb 20 12:11:21.849388 master-0 kubenswrapper[31420]: I0220 12:11:21.849098 31420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:11:21.849388 master-0 kubenswrapper[31420]: I0220 12:11:21.849111 31420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1bccbfa1-2cb4-462d-b428-e51795ea2442-console-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:11:22.700274 master-0 kubenswrapper[31420]: I0220 12:11:22.700202 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b66c6d797-hsk56_1bccbfa1-2cb4-462d-b428-e51795ea2442/console/0.log" Feb 20 12:11:22.701346 master-0 kubenswrapper[31420]: I0220 12:11:22.700397 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b66c6d797-hsk56" Feb 20 12:11:22.701346 master-0 kubenswrapper[31420]: I0220 12:11:22.700391 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b66c6d797-hsk56" event={"ID":"1bccbfa1-2cb4-462d-b428-e51795ea2442","Type":"ContainerDied","Data":"2f349097989e1d4876bf24ffc1479a0cbeeb6f226fdd6e7c71980428aca0a813"} Feb 20 12:11:22.701346 master-0 kubenswrapper[31420]: I0220 12:11:22.700667 31420 scope.go:117] "RemoveContainer" containerID="ed895d7eb4ef4d65df7c0d8d39454c7eec115cf3f49e46a95425e0d6541e8f5d" Feb 20 12:11:22.704381 master-0 kubenswrapper[31420]: I0220 12:11:22.704320 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4215950490466d14c18f823ce9d5dee1","Type":"ContainerStarted","Data":"86ffaae102e43f0dd86ed40cb543f05937e046c1c4a8f2cf5644926ec2d7c961"} Feb 20 12:11:22.751405 master-0 kubenswrapper[31420]: I0220 12:11:22.751275 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.7512463929999997 podStartE2EDuration="2.751246393s" podCreationTimestamp="2026-02-20 12:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:11:22.736228116 +0000 UTC m=+387.455466397" watchObservedRunningTime="2026-02-20 12:11:22.751246393 +0000 UTC m=+387.470484674" Feb 20 12:11:22.765016 master-0 kubenswrapper[31420]: I0220 12:11:22.764944 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b66c6d797-hsk56"] Feb 20 12:11:22.782031 master-0 kubenswrapper[31420]: I0220 12:11:22.779975 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b66c6d797-hsk56"] Feb 20 12:11:23.512318 master-0 kubenswrapper[31420]: I0220 
12:11:23.512233 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bccbfa1-2cb4-462d-b428-e51795ea2442" path="/var/lib/kubelet/pods/1bccbfa1-2cb4-462d-b428-e51795ea2442/volumes" Feb 20 12:11:30.579737 master-0 kubenswrapper[31420]: I0220 12:11:30.577060 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:30.579737 master-0 kubenswrapper[31420]: I0220 12:11:30.577362 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:30.579737 master-0 kubenswrapper[31420]: I0220 12:11:30.577409 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:30.579737 master-0 kubenswrapper[31420]: I0220 12:11:30.577436 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:30.584789 master-0 kubenswrapper[31420]: I0220 12:11:30.584736 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:30.593735 master-0 kubenswrapper[31420]: I0220 12:11:30.593666 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:30.801777 master-0 kubenswrapper[31420]: I0220 12:11:30.801686 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 12:11:31.812816 master-0 kubenswrapper[31420]: I0220 12:11:31.812719 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 
12:11:33.194242 master-0 kubenswrapper[31420]: I0220 12:11:33.193919 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7f6444fbcc-rvd49" podUID="79e3c3e3-1405-4ac9-b024-b6f2d25347b4" containerName="console" containerID="cri-o://813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22" gracePeriod=15 Feb 20 12:11:33.803384 master-0 kubenswrapper[31420]: I0220 12:11:33.803318 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7f6444fbcc-rvd49_79e3c3e3-1405-4ac9-b024-b6f2d25347b4/console/0.log" Feb 20 12:11:33.803873 master-0 kubenswrapper[31420]: I0220 12:11:33.803419 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f6444fbcc-rvd49" Feb 20 12:11:33.826475 master-0 kubenswrapper[31420]: I0220 12:11:33.826307 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7f6444fbcc-rvd49_79e3c3e3-1405-4ac9-b024-b6f2d25347b4/console/0.log" Feb 20 12:11:33.826475 master-0 kubenswrapper[31420]: I0220 12:11:33.826449 31420 generic.go:334] "Generic (PLEG): container finished" podID="79e3c3e3-1405-4ac9-b024-b6f2d25347b4" containerID="813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22" exitCode=2 Feb 20 12:11:33.827184 master-0 kubenswrapper[31420]: I0220 12:11:33.826574 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f6444fbcc-rvd49" Feb 20 12:11:33.827184 master-0 kubenswrapper[31420]: I0220 12:11:33.826632 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f6444fbcc-rvd49" event={"ID":"79e3c3e3-1405-4ac9-b024-b6f2d25347b4","Type":"ContainerDied","Data":"813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22"} Feb 20 12:11:33.827184 master-0 kubenswrapper[31420]: I0220 12:11:33.826712 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f6444fbcc-rvd49" event={"ID":"79e3c3e3-1405-4ac9-b024-b6f2d25347b4","Type":"ContainerDied","Data":"3e5d2298e3d9e07ec065996b96601a8bf745dd995ddcab587dd163020295fac8"} Feb 20 12:11:33.827184 master-0 kubenswrapper[31420]: I0220 12:11:33.826745 31420 scope.go:117] "RemoveContainer" containerID="813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22" Feb 20 12:11:33.857233 master-0 kubenswrapper[31420]: I0220 12:11:33.857079 31420 scope.go:117] "RemoveContainer" containerID="813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22" Feb 20 12:11:33.857824 master-0 kubenswrapper[31420]: E0220 12:11:33.857761 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22\": container with ID starting with 813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22 not found: ID does not exist" containerID="813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22" Feb 20 12:11:33.858097 master-0 kubenswrapper[31420]: I0220 12:11:33.857886 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22"} err="failed to get container status \"813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22\": rpc error: code = NotFound desc = could not find 
container \"813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22\": container with ID starting with 813a359c8c6e4414bb8ec559afea2532cf036055bbb4319101d2daa4dfcd6e22 not found: ID does not exist" Feb 20 12:11:33.996064 master-0 kubenswrapper[31420]: I0220 12:11:33.995813 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-trusted-ca-bundle\") pod \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " Feb 20 12:11:33.996064 master-0 kubenswrapper[31420]: I0220 12:11:33.995971 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-oauth-config\") pod \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " Feb 20 12:11:33.996064 master-0 kubenswrapper[31420]: I0220 12:11:33.996026 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") pod \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " Feb 20 12:11:33.996902 master-0 kubenswrapper[31420]: I0220 12:11:33.996090 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-service-ca\") pod \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " Feb 20 12:11:33.996902 master-0 kubenswrapper[31420]: I0220 12:11:33.996172 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-config\") pod 
\"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " Feb 20 12:11:33.996902 master-0 kubenswrapper[31420]: I0220 12:11:33.996205 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-oauth-serving-cert\") pod \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " Feb 20 12:11:33.996902 master-0 kubenswrapper[31420]: I0220 12:11:33.996260 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvx8\" (UniqueName: \"kubernetes.io/projected/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-kube-api-access-twvx8\") pod \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\" (UID: \"79e3c3e3-1405-4ac9-b024-b6f2d25347b4\") " Feb 20 12:11:33.996902 master-0 kubenswrapper[31420]: I0220 12:11:33.996413 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "79e3c3e3-1405-4ac9-b024-b6f2d25347b4" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:11:33.996902 master-0 kubenswrapper[31420]: I0220 12:11:33.996785 31420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:11:33.996902 master-0 kubenswrapper[31420]: I0220 12:11:33.996900 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-service-ca" (OuterVolumeSpecName: "service-ca") pod "79e3c3e3-1405-4ac9-b024-b6f2d25347b4" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:11:33.997831 master-0 kubenswrapper[31420]: I0220 12:11:33.997747 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-config" (OuterVolumeSpecName: "console-config") pod "79e3c3e3-1405-4ac9-b024-b6f2d25347b4" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:11:33.998166 master-0 kubenswrapper[31420]: I0220 12:11:33.997881 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "79e3c3e3-1405-4ac9-b024-b6f2d25347b4" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:11:34.000739 master-0 kubenswrapper[31420]: I0220 12:11:34.000677 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "79e3c3e3-1405-4ac9-b024-b6f2d25347b4" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:11:34.000891 master-0 kubenswrapper[31420]: I0220 12:11:34.000718 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "79e3c3e3-1405-4ac9-b024-b6f2d25347b4" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:11:34.004765 master-0 kubenswrapper[31420]: I0220 12:11:34.004720 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-kube-api-access-twvx8" (OuterVolumeSpecName: "kube-api-access-twvx8") pod "79e3c3e3-1405-4ac9-b024-b6f2d25347b4" (UID: "79e3c3e3-1405-4ac9-b024-b6f2d25347b4"). InnerVolumeSpecName "kube-api-access-twvx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:11:34.098914 master-0 kubenswrapper[31420]: I0220 12:11:34.098768 31420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:34.098914 master-0 kubenswrapper[31420]: I0220 12:11:34.098859 31420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:34.098914 master-0 kubenswrapper[31420]: I0220 12:11:34.098889 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twvx8\" (UniqueName: \"kubernetes.io/projected/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-kube-api-access-twvx8\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:34.098914 master-0 kubenswrapper[31420]: I0220 12:11:34.098915 31420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:34.099457 master-0 kubenswrapper[31420]: I0220 12:11:34.098942 31420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:34.099457 master-0 kubenswrapper[31420]: I0220 12:11:34.098968 31420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79e3c3e3-1405-4ac9-b024-b6f2d25347b4-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:34.190077 master-0 kubenswrapper[31420]: I0220 12:11:34.189944 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7f6444fbcc-rvd49"]
Feb 20 12:11:34.203391 master-0 kubenswrapper[31420]: I0220 12:11:34.201563 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7f6444fbcc-rvd49"]
Feb 20 12:11:35.511757 master-0 kubenswrapper[31420]: I0220 12:11:35.511653 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e3c3e3-1405-4ac9-b024-b6f2d25347b4" path="/var/lib/kubelet/pods/79e3c3e3-1405-4ac9-b024-b6f2d25347b4/volumes"
Feb 20 12:11:40.858473 master-0 kubenswrapper[31420]: I0220 12:11:40.858383 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 12:11:40.902648 master-0 kubenswrapper[31420]: I0220 12:11:40.902561 31420 generic.go:334] "Generic (PLEG): container finished" podID="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" containerID="4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a" exitCode=0
Feb 20 12:11:40.902648 master-0 kubenswrapper[31420]: I0220 12:11:40.902641 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" event={"ID":"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c","Type":"ContainerDied","Data":"4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a"}
Feb 20 12:11:40.903028 master-0 kubenswrapper[31420]: I0220 12:11:40.902680 31420 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"
Feb 20 12:11:40.903028 master-0 kubenswrapper[31420]: I0220 12:11:40.902713 31420 scope.go:117] "RemoveContainer" containerID="4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a"
Feb 20 12:11:40.903028 master-0 kubenswrapper[31420]: I0220 12:11:40.902694 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l" event={"ID":"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c","Type":"ContainerDied","Data":"166c259337ecc4f073ab3f6650460578e7d7cb947fe167df547591f1f002809b"}
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.934040 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle\") pod \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") "
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.934128 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls\") pod \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") "
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.934153 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle\") pod \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") "
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.934722 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.934744 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles\") pod \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") "
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.934876 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfzqt\" (UniqueName: \"kubernetes.io/projected/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-kube-api-access-kfzqt\") pod \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") "
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.934937 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-audit-log\") pod \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") "
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.935159 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs\") pod \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\" (UID: \"6717f0b4-c2f6-4ed5-94fb-778e5c7c983c\") "
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.935314 31420
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.936032 31420 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:40.936593 master-0 kubenswrapper[31420]: I0220 12:11:40.936060 31420 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:40.938241 master-0 kubenswrapper[31420]: I0220 12:11:40.938149 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-audit-log" (OuterVolumeSpecName: "audit-log") pod "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:11:40.938416 master-0 kubenswrapper[31420]: I0220 12:11:40.938394 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:11:40.939582 master-0 kubenswrapper[31420]: I0220 12:11:40.939499 31420 scope.go:117] "RemoveContainer" containerID="4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a"
Feb 20 12:11:40.941093 master-0 kubenswrapper[31420]: E0220 12:11:40.940264 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a\": container with ID starting with 4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a not found: ID does not exist" containerID="4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a"
Feb 20 12:11:40.941093 master-0 kubenswrapper[31420]: I0220 12:11:40.940318 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a"} err="failed to get container status \"4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a\": rpc error: code = NotFound desc = could not find container \"4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a\": container with ID starting with 4b07f9abf3972d915c07bfcfda610b6bf504d961e92676742b5c0b96fc8aab1a not found: ID does not exist"
Feb 20 12:11:40.941562 master-0 kubenswrapper[31420]: I0220 12:11:40.941515 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c"). InnerVolumeSpecName "secret-metrics-server-tls".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:11:40.942764 master-0 kubenswrapper[31420]: I0220 12:11:40.942694 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:11:40.946999 master-0 kubenswrapper[31420]: I0220 12:11:40.946945 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-kube-api-access-kfzqt" (OuterVolumeSpecName: "kube-api-access-kfzqt") pod "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" (UID: "6717f0b4-c2f6-4ed5-94fb-778e5c7c983c"). InnerVolumeSpecName "kube-api-access-kfzqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:11:41.037695 master-0 kubenswrapper[31420]: I0220 12:11:41.037552 31420 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:41.037695 master-0 kubenswrapper[31420]: I0220 12:11:41.037607 31420 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-client-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:41.037695 master-0 kubenswrapper[31420]: I0220 12:11:41.037623 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfzqt\" (UniqueName: \"kubernetes.io/projected/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-kube-api-access-kfzqt\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:41.037695 master-0 kubenswrapper[31420]: I0220 12:11:41.037674 31420 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-audit-log\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:41.037695 master-0 kubenswrapper[31420]: I0220 12:11:41.037689 31420 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 12:11:41.265859 master-0 kubenswrapper[31420]: I0220 12:11:41.265688 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"]
Feb 20 12:11:41.278742 master-0 kubenswrapper[31420]: I0220 12:11:41.278664 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-7dcc9fb5fb-2fx9l"]
Feb 20 12:11:41.514485 master-0 kubenswrapper[31420]: I0220 12:11:41.514322 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" path="/var/lib/kubelet/pods/6717f0b4-c2f6-4ed5-94fb-778e5c7c983c/volumes"
Feb 20 12:11:43.657044 master-0 kubenswrapper[31420]: I0220 12:11:43.656922 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-skcgc"]
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: E0220 12:11:43.657319 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e3c3e3-1405-4ac9-b024-b6f2d25347b4" containerName="console"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: I0220 12:11:43.657335 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e3c3e3-1405-4ac9-b024-b6f2d25347b4" containerName="console"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: E0220 12:11:43.657362 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" containerName="metrics-server"
Feb 20 12:11:43.657957 master-0
kubenswrapper[31420]: I0220 12:11:43.657371 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" containerName="metrics-server"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: E0220 12:11:43.657393 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bccbfa1-2cb4-462d-b428-e51795ea2442" containerName="console"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: I0220 12:11:43.657402 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bccbfa1-2cb4-462d-b428-e51795ea2442" containerName="console"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: E0220 12:11:43.657443 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbbaa38c-b860-4dbd-9825-bc28b2d025bf" containerName="installer"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: I0220 12:11:43.657451 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbbaa38c-b860-4dbd-9825-bc28b2d025bf" containerName="installer"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: I0220 12:11:43.657635 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="6717f0b4-c2f6-4ed5-94fb-778e5c7c983c" containerName="metrics-server"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: I0220 12:11:43.657672 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e3c3e3-1405-4ac9-b024-b6f2d25347b4" containerName="console"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: I0220 12:11:43.657682 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bccbfa1-2cb4-462d-b428-e51795ea2442" containerName="console"
Feb 20 12:11:43.657957 master-0 kubenswrapper[31420]: I0220 12:11:43.657701 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbbaa38c-b860-4dbd-9825-bc28b2d025bf" containerName="installer"
Feb 20 12:11:43.658732 master-0 kubenswrapper[31420]: I0220 12:11:43.658279 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:43.662513 master-0 kubenswrapper[31420]: I0220 12:11:43.662428 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt"
Feb 20 12:11:43.662716 master-0 kubenswrapper[31420]: I0220 12:11:43.662588 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt"
Feb 20 12:11:43.662839 master-0 kubenswrapper[31420]: I0220 12:11:43.662443 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config"
Feb 20 12:11:43.664682 master-0 kubenswrapper[31420]: I0220 12:11:43.664627 31420 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config"
Feb 20 12:11:43.692452 master-0 kubenswrapper[31420]: I0220 12:11:43.692380 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c44ebcae-003d-4347-8ab6-36cc5b16e2df-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-skcgc\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:43.692803 master-0 kubenswrapper[31420]: I0220 12:11:43.692492 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c44ebcae-003d-4347-8ab6-36cc5b16e2df-os-client-config\") pod \"sushy-emulator-78f6d7d749-skcgc\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:43.692930 master-0 kubenswrapper[31420]: I0220 12:11:43.692786 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgrkx\" (UniqueName:
\"kubernetes.io/projected/c44ebcae-003d-4347-8ab6-36cc5b16e2df-kube-api-access-fgrkx\") pod \"sushy-emulator-78f6d7d749-skcgc\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:43.785112 master-0 kubenswrapper[31420]: I0220 12:11:43.785012 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-skcgc"]
Feb 20 12:11:43.795662 master-0 kubenswrapper[31420]: I0220 12:11:43.795576 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c44ebcae-003d-4347-8ab6-36cc5b16e2df-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-skcgc\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:43.795821 master-0 kubenswrapper[31420]: I0220 12:11:43.795724 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c44ebcae-003d-4347-8ab6-36cc5b16e2df-os-client-config\") pod \"sushy-emulator-78f6d7d749-skcgc\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:43.795821 master-0 kubenswrapper[31420]: I0220 12:11:43.795801 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgrkx\" (UniqueName: \"kubernetes.io/projected/c44ebcae-003d-4347-8ab6-36cc5b16e2df-kube-api-access-fgrkx\") pod \"sushy-emulator-78f6d7d749-skcgc\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:43.797024 master-0 kubenswrapper[31420]: I0220 12:11:43.796957 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c44ebcae-003d-4347-8ab6-36cc5b16e2df-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-skcgc\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:43.800845 master-0 kubenswrapper[31420]: I0220 12:11:43.800794 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c44ebcae-003d-4347-8ab6-36cc5b16e2df-os-client-config\") pod \"sushy-emulator-78f6d7d749-skcgc\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:43.870575 master-0 kubenswrapper[31420]: I0220 12:11:43.864712 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgrkx\" (UniqueName: \"kubernetes.io/projected/c44ebcae-003d-4347-8ab6-36cc5b16e2df-kube-api-access-fgrkx\") pod \"sushy-emulator-78f6d7d749-skcgc\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:44.006587 master-0 kubenswrapper[31420]: I0220 12:11:44.006482 31420 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:44.557315 master-0 kubenswrapper[31420]: I0220 12:11:44.557231 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-skcgc"]
Feb 20 12:11:44.560984 master-0 kubenswrapper[31420]: I0220 12:11:44.560890 31420 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 12:11:44.950255 master-0 kubenswrapper[31420]: I0220 12:11:44.950139 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc" event={"ID":"c44ebcae-003d-4347-8ab6-36cc5b16e2df","Type":"ContainerStarted","Data":"8c3861b0a28b63b1084d2fbdac77c6fd1b306f40ecedd4690b672e10f13f1843"}
Feb 20 12:11:53.018076 master-0 kubenswrapper[31420]: I0220 12:11:53.017980 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc" event={"ID":"c44ebcae-003d-4347-8ab6-36cc5b16e2df","Type":"ContainerStarted","Data":"cb10b6e24e617d07b45499c736c68763ec59ffb74c9f30677d7b1dabe8eea533"}
Feb 20 12:11:53.054613 master-0 kubenswrapper[31420]: I0220 12:11:53.053824 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc" podStartSLOduration=2.165645556 podStartE2EDuration="10.053791442s" podCreationTimestamp="2026-02-20 12:11:43 +0000 UTC" firstStartedPulling="2026-02-20 12:11:44.560839499 +0000 UTC m=+409.280077740" lastFinishedPulling="2026-02-20 12:11:52.448985365 +0000 UTC m=+417.168223626" observedRunningTime="2026-02-20 12:11:53.04364129 +0000 UTC m=+417.762879571" watchObservedRunningTime="2026-02-20 12:11:53.053791442 +0000 UTC m=+417.773029723"
Feb 20 12:11:54.007515 master-0 kubenswrapper[31420]: I0220 12:11:54.007361 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:54.007515 master-0 kubenswrapper[31420]: I0220 12:11:54.007456 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:54.021754 master-0 kubenswrapper[31420]: I0220 12:11:54.021706 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:11:54.039629 master-0 kubenswrapper[31420]: I0220 12:11:54.030068 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc"
Feb 20 12:12:14.178202 master-0 kubenswrapper[31420]: I0220 12:12:14.178096 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"]
Feb 20 12:12:14.180845 master-0 kubenswrapper[31420]: I0220 12:12:14.180786 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"
Feb 20 12:12:14.211688 master-0 kubenswrapper[31420]: I0220 12:12:14.211357 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"]
Feb 20 12:12:14.255109 master-0 kubenswrapper[31420]: I0220 12:12:14.255018 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5294\" (UniqueName: \"kubernetes.io/projected/30f6dfd7-5ba6-42e8-be72-7a0297222446-kube-api-access-k5294\") pod \"nova-console-poller-84bc95cd67-2mt9w\" (UID: \"30f6dfd7-5ba6-42e8-be72-7a0297222446\") " pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"
Feb 20 12:12:14.255322 master-0 kubenswrapper[31420]: I0220 12:12:14.255123 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/30f6dfd7-5ba6-42e8-be72-7a0297222446-os-client-config\") pod \"nova-console-poller-84bc95cd67-2mt9w\" (UID:
\"30f6dfd7-5ba6-42e8-be72-7a0297222446\") " pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"
Feb 20 12:12:14.356219 master-0 kubenswrapper[31420]: I0220 12:12:14.356098 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5294\" (UniqueName: \"kubernetes.io/projected/30f6dfd7-5ba6-42e8-be72-7a0297222446-kube-api-access-k5294\") pod \"nova-console-poller-84bc95cd67-2mt9w\" (UID: \"30f6dfd7-5ba6-42e8-be72-7a0297222446\") " pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"
Feb 20 12:12:14.356219 master-0 kubenswrapper[31420]: I0220 12:12:14.356215 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/30f6dfd7-5ba6-42e8-be72-7a0297222446-os-client-config\") pod \"nova-console-poller-84bc95cd67-2mt9w\" (UID: \"30f6dfd7-5ba6-42e8-be72-7a0297222446\") " pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"
Feb 20 12:12:14.362492 master-0 kubenswrapper[31420]: I0220 12:12:14.362414 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/30f6dfd7-5ba6-42e8-be72-7a0297222446-os-client-config\") pod \"nova-console-poller-84bc95cd67-2mt9w\" (UID: \"30f6dfd7-5ba6-42e8-be72-7a0297222446\") " pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"
Feb 20 12:12:14.409096 master-0 kubenswrapper[31420]: I0220 12:12:14.409038 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5294\" (UniqueName: \"kubernetes.io/projected/30f6dfd7-5ba6-42e8-be72-7a0297222446-kube-api-access-k5294\") pod \"nova-console-poller-84bc95cd67-2mt9w\" (UID: \"30f6dfd7-5ba6-42e8-be72-7a0297222446\") " pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"
Feb 20 12:12:14.504728 master-0 kubenswrapper[31420]: I0220 12:12:14.504669 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"
Feb 20 12:12:14.953266 master-0 kubenswrapper[31420]: I0220 12:12:14.953179 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-84bc95cd67-2mt9w"]
Feb 20 12:12:14.959049 master-0 kubenswrapper[31420]: W0220 12:12:14.958977 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30f6dfd7_5ba6_42e8_be72_7a0297222446.slice/crio-f33455e94190dda1c9fb4f7a3c85f99a516a5d300a2e6f9e77fc8da3f4229320 WatchSource:0}: Error finding container f33455e94190dda1c9fb4f7a3c85f99a516a5d300a2e6f9e77fc8da3f4229320: Status 404 returned error can't find the container with id f33455e94190dda1c9fb4f7a3c85f99a516a5d300a2e6f9e77fc8da3f4229320
Feb 20 12:12:15.218907 master-0 kubenswrapper[31420]: I0220 12:12:15.218353 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w" event={"ID":"30f6dfd7-5ba6-42e8-be72-7a0297222446","Type":"ContainerStarted","Data":"f33455e94190dda1c9fb4f7a3c85f99a516a5d300a2e6f9e77fc8da3f4229320"}
Feb 20 12:12:21.274918 master-0 kubenswrapper[31420]: I0220 12:12:21.274804 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w" event={"ID":"30f6dfd7-5ba6-42e8-be72-7a0297222446","Type":"ContainerStarted","Data":"f670b4c90ffefca386c812036e6a085d79b8d01c7523533aea7660379af8e5bb"}
Feb 20 12:12:21.274918 master-0 kubenswrapper[31420]: I0220 12:12:21.274907 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w" event={"ID":"30f6dfd7-5ba6-42e8-be72-7a0297222446","Type":"ContainerStarted","Data":"96bb49c5fd2d003f1d556365f27f0d94ea115f078266aa0f50832622f3e0c830"}
Feb 20 12:12:21.309109 master-0 kubenswrapper[31420]: I0220 12:12:21.308956 31420 pod_startup_latency_tracker.go:104] "Observed pod startup
duration" pod="sushy-emulator/nova-console-poller-84bc95cd67-2mt9w" podStartSLOduration=1.390503015 podStartE2EDuration="7.308928553s" podCreationTimestamp="2026-02-20 12:12:14 +0000 UTC" firstStartedPulling="2026-02-20 12:12:14.962587953 +0000 UTC m=+439.681826224" lastFinishedPulling="2026-02-20 12:12:20.881013491 +0000 UTC m=+445.600251762" observedRunningTime="2026-02-20 12:12:21.299809099 +0000 UTC m=+446.019047370" watchObservedRunningTime="2026-02-20 12:12:21.308928553 +0000 UTC m=+446.028166804"
Feb 20 12:12:46.413593 master-0 kubenswrapper[31420]: I0220 12:12:46.413467 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-c4dfff755-jpb76"]
Feb 20 12:12:46.417411 master-0 kubenswrapper[31420]: I0220 12:12:46.417365 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76"
Feb 20 12:12:46.457175 master-0 kubenswrapper[31420]: I0220 12:12:46.454817 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-c4dfff755-jpb76"]
Feb 20 12:12:46.470038 master-0 kubenswrapper[31420]: I0220 12:12:46.469962 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/b115fad1-0e4a-4117-839b-7ce0def6f786-nova-console-recordings-pv\") pod \"nova-console-recorder-c4dfff755-jpb76\" (UID: \"b115fad1-0e4a-4117-839b-7ce0def6f786\") " pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76"
Feb 20 12:12:46.470300 master-0 kubenswrapper[31420]: I0220 12:12:46.470094 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b115fad1-0e4a-4117-839b-7ce0def6f786-os-client-config\") pod \"nova-console-recorder-c4dfff755-jpb76\" (UID: \"b115fad1-0e4a-4117-839b-7ce0def6f786\") " pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76"
Feb 20 12:12:46.470300 master-0 kubenswrapper[31420]: I0220 12:12:46.470165 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54cwz\" (UniqueName: \"kubernetes.io/projected/b115fad1-0e4a-4117-839b-7ce0def6f786-kube-api-access-54cwz\") pod \"nova-console-recorder-c4dfff755-jpb76\" (UID: \"b115fad1-0e4a-4117-839b-7ce0def6f786\") " pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76"
Feb 20 12:12:46.572389 master-0 kubenswrapper[31420]: I0220 12:12:46.572329 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54cwz\" (UniqueName: \"kubernetes.io/projected/b115fad1-0e4a-4117-839b-7ce0def6f786-kube-api-access-54cwz\") pod \"nova-console-recorder-c4dfff755-jpb76\" (UID: \"b115fad1-0e4a-4117-839b-7ce0def6f786\") " pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76"
Feb 20 12:12:46.572843 master-0 kubenswrapper[31420]: I0220 12:12:46.572810 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/b115fad1-0e4a-4117-839b-7ce0def6f786-nova-console-recordings-pv\") pod \"nova-console-recorder-c4dfff755-jpb76\" (UID: \"b115fad1-0e4a-4117-839b-7ce0def6f786\") " pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76"
Feb 20 12:12:46.573897 master-0 kubenswrapper[31420]: I0220 12:12:46.573818 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b115fad1-0e4a-4117-839b-7ce0def6f786-os-client-config\") pod \"nova-console-recorder-c4dfff755-jpb76\" (UID: \"b115fad1-0e4a-4117-839b-7ce0def6f786\") " pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76"
Feb 20 12:12:46.590373 master-0 kubenswrapper[31420]: I0220 12:12:46.590304 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\"
(UniqueName: \"kubernetes.io/secret/b115fad1-0e4a-4117-839b-7ce0def6f786-os-client-config\") pod \"nova-console-recorder-c4dfff755-jpb76\" (UID: \"b115fad1-0e4a-4117-839b-7ce0def6f786\") " pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76" Feb 20 12:12:46.593135 master-0 kubenswrapper[31420]: I0220 12:12:46.593078 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54cwz\" (UniqueName: \"kubernetes.io/projected/b115fad1-0e4a-4117-839b-7ce0def6f786-kube-api-access-54cwz\") pod \"nova-console-recorder-c4dfff755-jpb76\" (UID: \"b115fad1-0e4a-4117-839b-7ce0def6f786\") " pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76" Feb 20 12:12:47.252766 master-0 kubenswrapper[31420]: I0220 12:12:47.252680 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/b115fad1-0e4a-4117-839b-7ce0def6f786-nova-console-recordings-pv\") pod \"nova-console-recorder-c4dfff755-jpb76\" (UID: \"b115fad1-0e4a-4117-839b-7ce0def6f786\") " pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76" Feb 20 12:12:47.350154 master-0 kubenswrapper[31420]: I0220 12:12:47.350075 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76" Feb 20 12:12:47.895711 master-0 kubenswrapper[31420]: I0220 12:12:47.895644 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-c4dfff755-jpb76"] Feb 20 12:12:47.904644 master-0 kubenswrapper[31420]: W0220 12:12:47.904590 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb115fad1_0e4a_4117_839b_7ce0def6f786.slice/crio-9435824e8c9d99f7ef3a23b74600f1610ab12bc262e1955557ccb80e7d95dd99 WatchSource:0}: Error finding container 9435824e8c9d99f7ef3a23b74600f1610ab12bc262e1955557ccb80e7d95dd99: Status 404 returned error can't find the container with id 9435824e8c9d99f7ef3a23b74600f1610ab12bc262e1955557ccb80e7d95dd99 Feb 20 12:12:48.547028 master-0 kubenswrapper[31420]: I0220 12:12:48.546973 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76" event={"ID":"b115fad1-0e4a-4117-839b-7ce0def6f786","Type":"ContainerStarted","Data":"9435824e8c9d99f7ef3a23b74600f1610ab12bc262e1955557ccb80e7d95dd99"} Feb 20 12:12:59.926044 master-0 kubenswrapper[31420]: I0220 12:12:59.925869 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76" event={"ID":"b115fad1-0e4a-4117-839b-7ce0def6f786","Type":"ContainerStarted","Data":"af8229cdbfa7a211f0d6aef7be9df93a176317cbb314742aff3f15007d8a99b2"} Feb 20 12:13:02.047286 master-0 kubenswrapper[31420]: I0220 12:13:02.047171 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76" event={"ID":"b115fad1-0e4a-4117-839b-7ce0def6f786","Type":"ContainerStarted","Data":"8953ff84be26f364d6bb1db31ac4aba13c9aaab119a17e25d940c34cfc5f3cbd"} Feb 20 12:13:02.079748 master-0 kubenswrapper[31420]: I0220 12:13:02.078911 31420 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="sushy-emulator/nova-console-recorder-c4dfff755-jpb76" podStartSLOduration=2.587581428 podStartE2EDuration="16.078888602s" podCreationTimestamp="2026-02-20 12:12:46 +0000 UTC" firstStartedPulling="2026-02-20 12:12:47.90788458 +0000 UTC m=+472.627122831" lastFinishedPulling="2026-02-20 12:13:01.399191734 +0000 UTC m=+486.118430005" observedRunningTime="2026-02-20 12:13:02.073334408 +0000 UTC m=+486.792572719" watchObservedRunningTime="2026-02-20 12:13:02.078888602 +0000 UTC m=+486.798126853" Feb 20 12:13:27.818224 master-0 kubenswrapper[31420]: I0220 12:13:27.818143 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d"] Feb 20 12:13:27.819769 master-0 kubenswrapper[31420]: I0220 12:13:27.819734 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:27.825114 master-0 kubenswrapper[31420]: I0220 12:13:27.822713 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-j4skv" Feb 20 12:13:27.843422 master-0 kubenswrapper[31420]: I0220 12:13:27.843331 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d"] Feb 20 12:13:27.936835 master-0 kubenswrapper[31420]: I0220 12:13:27.936745 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:27.937080 master-0 kubenswrapper[31420]: I0220 12:13:27.936852 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrg79\" (UniqueName: \"kubernetes.io/projected/04dbb2e6-09f7-4521-8edc-e032a4d0e239-kube-api-access-wrg79\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:27.937080 master-0 kubenswrapper[31420]: I0220 12:13:27.937049 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:28.038380 master-0 kubenswrapper[31420]: I0220 12:13:28.038299 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:28.038607 master-0 kubenswrapper[31420]: I0220 12:13:28.038408 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrg79\" (UniqueName: \"kubernetes.io/projected/04dbb2e6-09f7-4521-8edc-e032a4d0e239-kube-api-access-wrg79\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:28.038607 master-0 kubenswrapper[31420]: I0220 12:13:28.038469 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:28.039005 master-0 kubenswrapper[31420]: I0220 12:13:28.038970 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:28.039065 master-0 kubenswrapper[31420]: I0220 12:13:28.039014 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:28.055723 master-0 kubenswrapper[31420]: I0220 12:13:28.055609 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrg79\" (UniqueName: \"kubernetes.io/projected/04dbb2e6-09f7-4521-8edc-e032a4d0e239-kube-api-access-wrg79\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:28.137601 master-0 kubenswrapper[31420]: I0220 12:13:28.137415 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:28.630949 master-0 kubenswrapper[31420]: I0220 12:13:28.630082 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d"] Feb 20 12:13:28.639196 master-0 kubenswrapper[31420]: W0220 12:13:28.639128 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04dbb2e6_09f7_4521_8edc_e032a4d0e239.slice/crio-ea4c0cc6d5d5b94a1777b47f619d127fbf2c34753a314d7692c733451a182657 WatchSource:0}: Error finding container ea4c0cc6d5d5b94a1777b47f619d127fbf2c34753a314d7692c733451a182657: Status 404 returned error can't find the container with id ea4c0cc6d5d5b94a1777b47f619d127fbf2c34753a314d7692c733451a182657 Feb 20 12:13:29.306111 master-0 kubenswrapper[31420]: I0220 12:13:29.305988 31420 generic.go:334] "Generic (PLEG): container finished" podID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerID="ff8c643e675534925102c6c5018a69afef8f62d1e39d6c47ff5327ff0020818a" exitCode=0 Feb 20 12:13:29.306694 master-0 kubenswrapper[31420]: I0220 12:13:29.306126 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" event={"ID":"04dbb2e6-09f7-4521-8edc-e032a4d0e239","Type":"ContainerDied","Data":"ff8c643e675534925102c6c5018a69afef8f62d1e39d6c47ff5327ff0020818a"} Feb 20 12:13:29.306694 master-0 kubenswrapper[31420]: I0220 12:13:29.306223 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" event={"ID":"04dbb2e6-09f7-4521-8edc-e032a4d0e239","Type":"ContainerStarted","Data":"ea4c0cc6d5d5b94a1777b47f619d127fbf2c34753a314d7692c733451a182657"} Feb 20 12:13:32.333837 master-0 kubenswrapper[31420]: I0220 12:13:32.333750 31420 
generic.go:334] "Generic (PLEG): container finished" podID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerID="ba2718ba3e0abccb24e8dedd2a57034b0eb87fdbad5abc0c90a265fcc0ebddfb" exitCode=0 Feb 20 12:13:32.334791 master-0 kubenswrapper[31420]: I0220 12:13:32.333847 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" event={"ID":"04dbb2e6-09f7-4521-8edc-e032a4d0e239","Type":"ContainerDied","Data":"ba2718ba3e0abccb24e8dedd2a57034b0eb87fdbad5abc0c90a265fcc0ebddfb"} Feb 20 12:13:33.351810 master-0 kubenswrapper[31420]: I0220 12:13:33.351731 31420 generic.go:334] "Generic (PLEG): container finished" podID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerID="b8548e3a45c84d278450fab763f6dd01d29882501d51e27ac52b610b486e7d92" exitCode=0 Feb 20 12:13:33.353030 master-0 kubenswrapper[31420]: I0220 12:13:33.351812 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" event={"ID":"04dbb2e6-09f7-4521-8edc-e032a4d0e239","Type":"ContainerDied","Data":"b8548e3a45c84d278450fab763f6dd01d29882501d51e27ac52b610b486e7d92"} Feb 20 12:13:34.722159 master-0 kubenswrapper[31420]: I0220 12:13:34.722088 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:34.861253 master-0 kubenswrapper[31420]: I0220 12:13:34.861151 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrg79\" (UniqueName: \"kubernetes.io/projected/04dbb2e6-09f7-4521-8edc-e032a4d0e239-kube-api-access-wrg79\") pod \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " Feb 20 12:13:34.861694 master-0 kubenswrapper[31420]: I0220 12:13:34.861366 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-util\") pod \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " Feb 20 12:13:34.861694 master-0 kubenswrapper[31420]: I0220 12:13:34.861441 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-bundle\") pod \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\" (UID: \"04dbb2e6-09f7-4521-8edc-e032a4d0e239\") " Feb 20 12:13:34.863445 master-0 kubenswrapper[31420]: I0220 12:13:34.863376 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-bundle" (OuterVolumeSpecName: "bundle") pod "04dbb2e6-09f7-4521-8edc-e032a4d0e239" (UID: "04dbb2e6-09f7-4521-8edc-e032a4d0e239"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:13:34.869185 master-0 kubenswrapper[31420]: I0220 12:13:34.869120 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04dbb2e6-09f7-4521-8edc-e032a4d0e239-kube-api-access-wrg79" (OuterVolumeSpecName: "kube-api-access-wrg79") pod "04dbb2e6-09f7-4521-8edc-e032a4d0e239" (UID: "04dbb2e6-09f7-4521-8edc-e032a4d0e239"). InnerVolumeSpecName "kube-api-access-wrg79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:13:34.876496 master-0 kubenswrapper[31420]: I0220 12:13:34.876416 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-util" (OuterVolumeSpecName: "util") pod "04dbb2e6-09f7-4521-8edc-e032a4d0e239" (UID: "04dbb2e6-09f7-4521-8edc-e032a4d0e239"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:13:34.965151 master-0 kubenswrapper[31420]: I0220 12:13:34.964999 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrg79\" (UniqueName: \"kubernetes.io/projected/04dbb2e6-09f7-4521-8edc-e032a4d0e239-kube-api-access-wrg79\") on node \"master-0\" DevicePath \"\"" Feb 20 12:13:34.965151 master-0 kubenswrapper[31420]: I0220 12:13:34.965074 31420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-util\") on node \"master-0\" DevicePath \"\"" Feb 20 12:13:34.965151 master-0 kubenswrapper[31420]: I0220 12:13:34.965100 31420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04dbb2e6-09f7-4521-8edc-e032a4d0e239-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:13:35.379698 master-0 kubenswrapper[31420]: I0220 12:13:35.379562 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" event={"ID":"04dbb2e6-09f7-4521-8edc-e032a4d0e239","Type":"ContainerDied","Data":"ea4c0cc6d5d5b94a1777b47f619d127fbf2c34753a314d7692c733451a182657"} Feb 20 12:13:35.379698 master-0 kubenswrapper[31420]: I0220 12:13:35.379671 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4c0cc6d5d5b94a1777b47f619d127fbf2c34753a314d7692c733451a182657" Feb 20 12:13:35.379698 master-0 kubenswrapper[31420]: I0220 12:13:35.379611 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4m4h9d" Feb 20 12:13:42.745603 master-0 kubenswrapper[31420]: I0220 12:13:42.745486 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7767fff477-dzwhs"] Feb 20 12:13:42.746915 master-0 kubenswrapper[31420]: E0220 12:13:42.746891 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerName="pull" Feb 20 12:13:42.747024 master-0 kubenswrapper[31420]: I0220 12:13:42.747009 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerName="pull" Feb 20 12:13:42.747175 master-0 kubenswrapper[31420]: E0220 12:13:42.747161 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerName="util" Feb 20 12:13:42.747253 master-0 kubenswrapper[31420]: I0220 12:13:42.747240 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerName="util" Feb 20 12:13:42.747337 master-0 kubenswrapper[31420]: E0220 12:13:42.747324 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerName="extract" Feb 20 12:13:42.747418 master-0 kubenswrapper[31420]: I0220 
12:13:42.747405 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerName="extract" Feb 20 12:13:42.747748 master-0 kubenswrapper[31420]: I0220 12:13:42.747729 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dbb2e6-09f7-4521-8edc-e032a4d0e239" containerName="extract" Feb 20 12:13:42.748485 master-0 kubenswrapper[31420]: I0220 12:13:42.748463 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.751067 master-0 kubenswrapper[31420]: I0220 12:13:42.750922 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Feb 20 12:13:42.756653 master-0 kubenswrapper[31420]: I0220 12:13:42.752590 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Feb 20 12:13:42.756653 master-0 kubenswrapper[31420]: I0220 12:13:42.752996 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Feb 20 12:13:42.756653 master-0 kubenswrapper[31420]: I0220 12:13:42.753170 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Feb 20 12:13:42.756653 master-0 kubenswrapper[31420]: I0220 12:13:42.753309 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Feb 20 12:13:42.767611 master-0 kubenswrapper[31420]: I0220 12:13:42.767048 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7767fff477-dzwhs"] Feb 20 12:13:42.822359 master-0 kubenswrapper[31420]: I0220 12:13:42.822208 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8v4q\" (UniqueName: 
\"kubernetes.io/projected/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-kube-api-access-s8v4q\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.822359 master-0 kubenswrapper[31420]: I0220 12:13:42.822270 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-apiservice-cert\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.822359 master-0 kubenswrapper[31420]: I0220 12:13:42.822314 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-metrics-cert\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.822646 master-0 kubenswrapper[31420]: I0220 12:13:42.822441 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-webhook-cert\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.822646 master-0 kubenswrapper[31420]: I0220 12:13:42.822639 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-socket-dir\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.924428 master-0 kubenswrapper[31420]: 
I0220 12:13:42.924369 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-socket-dir\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.924807 master-0 kubenswrapper[31420]: I0220 12:13:42.924740 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8v4q\" (UniqueName: \"kubernetes.io/projected/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-kube-api-access-s8v4q\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.924890 master-0 kubenswrapper[31420]: I0220 12:13:42.924870 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-apiservice-cert\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.924942 master-0 kubenswrapper[31420]: I0220 12:13:42.924930 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-metrics-cert\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.925015 master-0 kubenswrapper[31420]: I0220 12:13:42.924979 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-socket-dir\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " 
pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.925070 master-0 kubenswrapper[31420]: I0220 12:13:42.924994 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-webhook-cert\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.928508 master-0 kubenswrapper[31420]: I0220 12:13:42.928476 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-metrics-cert\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.928844 master-0 kubenswrapper[31420]: I0220 12:13:42.928805 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-webhook-cert\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.929726 master-0 kubenswrapper[31420]: I0220 12:13:42.929679 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-apiservice-cert\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: \"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:42.948345 master-0 kubenswrapper[31420]: I0220 12:13:42.948291 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8v4q\" (UniqueName: \"kubernetes.io/projected/998b82f2-21a8-4ea6-9b96-89ddcb1142e1-kube-api-access-s8v4q\") pod \"lvms-operator-7767fff477-dzwhs\" (UID: 
\"998b82f2-21a8-4ea6-9b96-89ddcb1142e1\") " pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:43.073287 master-0 kubenswrapper[31420]: I0220 12:13:43.073221 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:43.514459 master-0 kubenswrapper[31420]: I0220 12:13:43.514380 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7767fff477-dzwhs"] Feb 20 12:13:43.517362 master-0 kubenswrapper[31420]: W0220 12:13:43.517276 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod998b82f2_21a8_4ea6_9b96_89ddcb1142e1.slice/crio-62046966a67a295995271a0fcc3df7757768544165b39aeddf813cb74f5ab25e WatchSource:0}: Error finding container 62046966a67a295995271a0fcc3df7757768544165b39aeddf813cb74f5ab25e: Status 404 returned error can't find the container with id 62046966a67a295995271a0fcc3df7757768544165b39aeddf813cb74f5ab25e Feb 20 12:13:44.467077 master-0 kubenswrapper[31420]: I0220 12:13:44.466976 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7767fff477-dzwhs" event={"ID":"998b82f2-21a8-4ea6-9b96-89ddcb1142e1","Type":"ContainerStarted","Data":"62046966a67a295995271a0fcc3df7757768544165b39aeddf813cb74f5ab25e"} Feb 20 12:13:48.513472 master-0 kubenswrapper[31420]: I0220 12:13:48.513398 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7767fff477-dzwhs" event={"ID":"998b82f2-21a8-4ea6-9b96-89ddcb1142e1","Type":"ContainerStarted","Data":"83eeab04203191479d90d3ab69cef45ad1db7c40f1d648f68d2b49df729c6e5e"} Feb 20 12:13:48.514591 master-0 kubenswrapper[31420]: I0220 12:13:48.513718 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:48.534589 master-0 kubenswrapper[31420]: I0220 
12:13:48.534475 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7767fff477-dzwhs" podStartSLOduration=1.994834012 podStartE2EDuration="6.534453652s" podCreationTimestamp="2026-02-20 12:13:42 +0000 UTC" firstStartedPulling="2026-02-20 12:13:43.521235625 +0000 UTC m=+528.240473866" lastFinishedPulling="2026-02-20 12:13:48.060855265 +0000 UTC m=+532.780093506" observedRunningTime="2026-02-20 12:13:48.532687683 +0000 UTC m=+533.251925934" watchObservedRunningTime="2026-02-20 12:13:48.534453652 +0000 UTC m=+533.253691893" Feb 20 12:13:49.526171 master-0 kubenswrapper[31420]: I0220 12:13:49.525906 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7767fff477-dzwhs" Feb 20 12:13:53.399238 master-0 kubenswrapper[31420]: I0220 12:13:53.399135 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"] Feb 20 12:13:53.402268 master-0 kubenswrapper[31420]: I0220 12:13:53.402214 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.406367 master-0 kubenswrapper[31420]: I0220 12:13:53.406304 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-j4skv"
Feb 20 12:13:53.417967 master-0 kubenswrapper[31420]: I0220 12:13:53.417881 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"]
Feb 20 12:13:53.541976 master-0 kubenswrapper[31420]: I0220 12:13:53.541872 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.541976 master-0 kubenswrapper[31420]: I0220 12:13:53.541952 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.541976 master-0 kubenswrapper[31420]: I0220 12:13:53.541996 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chpwf\" (UniqueName: \"kubernetes.io/projected/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-kube-api-access-chpwf\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.644760 master-0 kubenswrapper[31420]: I0220 12:13:53.644638 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.645518 master-0 kubenswrapper[31420]: I0220 12:13:53.645219 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.645518 master-0 kubenswrapper[31420]: I0220 12:13:53.645401 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chpwf\" (UniqueName: \"kubernetes.io/projected/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-kube-api-access-chpwf\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.645799 master-0 kubenswrapper[31420]: I0220 12:13:53.645705 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.646009 master-0 kubenswrapper[31420]: I0220 12:13:53.645930 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.664562 master-0 kubenswrapper[31420]: I0220 12:13:53.664353 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chpwf\" (UniqueName: \"kubernetes.io/projected/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-kube-api-access-chpwf\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.731111 master-0 kubenswrapper[31420]: I0220 12:13:53.730988 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:13:53.782748 master-0 kubenswrapper[31420]: I0220 12:13:53.782652 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"]
Feb 20 12:13:53.789267 master-0 kubenswrapper[31420]: I0220 12:13:53.789095 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:53.793976 master-0 kubenswrapper[31420]: I0220 12:13:53.793902 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"]
Feb 20 12:13:53.950862 master-0 kubenswrapper[31420]: I0220 12:13:53.950805 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdt5\" (UniqueName: \"kubernetes.io/projected/a745c583-8c85-4cd5-9495-c57fb3f69812-kube-api-access-9hdt5\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:53.951152 master-0 kubenswrapper[31420]: I0220 12:13:53.951096 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:53.951327 master-0 kubenswrapper[31420]: I0220 12:13:53.951298 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:54.052593 master-0 kubenswrapper[31420]: I0220 12:13:54.052502 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:54.052854 master-0 kubenswrapper[31420]: I0220 12:13:54.052672 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hdt5\" (UniqueName: \"kubernetes.io/projected/a745c583-8c85-4cd5-9495-c57fb3f69812-kube-api-access-9hdt5\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:54.052924 master-0 kubenswrapper[31420]: I0220 12:13:54.052887 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:54.053259 master-0 kubenswrapper[31420]: I0220 12:13:54.053215 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:54.053561 master-0 kubenswrapper[31420]: I0220 12:13:54.053400 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:54.084436 master-0 kubenswrapper[31420]: I0220 12:13:54.084359 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdt5\" (UniqueName: \"kubernetes.io/projected/a745c583-8c85-4cd5-9495-c57fb3f69812-kube-api-access-9hdt5\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:54.165330 master-0 kubenswrapper[31420]: I0220 12:13:54.164900 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:13:54.536904 master-0 kubenswrapper[31420]: W0220 12:13:54.536808 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dc6066f_00aa_4a1c_962d_b18bfcb2890b.slice/crio-8f379181b1e9723f93733cb55c25158cb7fe410f719192b04245e5a223233cb0 WatchSource:0}: Error finding container 8f379181b1e9723f93733cb55c25158cb7fe410f719192b04245e5a223233cb0: Status 404 returned error can't find the container with id 8f379181b1e9723f93733cb55c25158cb7fe410f719192b04245e5a223233cb0
Feb 20 12:13:54.547378 master-0 kubenswrapper[31420]: I0220 12:13:54.547283 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"]
Feb 20 12:13:54.563681 master-0 kubenswrapper[31420]: I0220 12:13:54.563583 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h" event={"ID":"8dc6066f-00aa-4a1c-962d-b18bfcb2890b","Type":"ContainerStarted","Data":"8f379181b1e9723f93733cb55c25158cb7fe410f719192b04245e5a223233cb0"}
Feb 20 12:13:54.649647 master-0 kubenswrapper[31420]: I0220 12:13:54.649044 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"]
Feb 20 12:13:54.651446 master-0 kubenswrapper[31420]: W0220 12:13:54.651386 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda745c583_8c85_4cd5_9495_c57fb3f69812.slice/crio-be216b7142fa741f4b4b846d698ad2f1fe05585f3a185d0059f3974b5919bdad WatchSource:0}: Error finding container be216b7142fa741f4b4b846d698ad2f1fe05585f3a185d0059f3974b5919bdad: Status 404 returned error can't find the container with id be216b7142fa741f4b4b846d698ad2f1fe05585f3a185d0059f3974b5919bdad
Feb 20 12:13:55.393187 master-0 kubenswrapper[31420]: I0220 12:13:55.392645 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"]
Feb 20 12:13:55.397518 master-0 kubenswrapper[31420]: I0220 12:13:55.397449 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.410427 master-0 kubenswrapper[31420]: I0220 12:13:55.410352 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"]
Feb 20 12:13:55.484511 master-0 kubenswrapper[31420]: I0220 12:13:55.484403 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljsj8\" (UniqueName: \"kubernetes.io/projected/88be940c-d575-424c-8bc3-e3dc038eb1fb-kube-api-access-ljsj8\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.484511 master-0 kubenswrapper[31420]: I0220 12:13:55.484513 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.484833 master-0 kubenswrapper[31420]: I0220 12:13:55.484704 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.585661 master-0 kubenswrapper[31420]: I0220 12:13:55.585611 31420 generic.go:334] "Generic (PLEG): container finished" podID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerID="5715459fb24d4f97104b503b364869e4b05565fc1cba7c0c4ad95b95cfed32d3" exitCode=0
Feb 20 12:13:55.586230 master-0 kubenswrapper[31420]: I0220 12:13:55.585686 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h" event={"ID":"8dc6066f-00aa-4a1c-962d-b18bfcb2890b","Type":"ContainerDied","Data":"5715459fb24d4f97104b503b364869e4b05565fc1cba7c0c4ad95b95cfed32d3"}
Feb 20 12:13:55.586589 master-0 kubenswrapper[31420]: I0220 12:13:55.586501 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.587454 master-0 kubenswrapper[31420]: I0220 12:13:55.586652 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljsj8\" (UniqueName: \"kubernetes.io/projected/88be940c-d575-424c-8bc3-e3dc038eb1fb-kube-api-access-ljsj8\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.587454 master-0 kubenswrapper[31420]: I0220 12:13:55.586700 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.587454 master-0 kubenswrapper[31420]: I0220 12:13:55.587389 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.587454 master-0 kubenswrapper[31420]: I0220 12:13:55.587405 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.590301 master-0 kubenswrapper[31420]: I0220 12:13:55.590224 31420 generic.go:334] "Generic (PLEG): container finished" podID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerID="286b8b6fa8614cb314732edfeb321b14ef8d6da6c239d9e21a1667bde4e774ba" exitCode=0
Feb 20 12:13:55.590301 master-0 kubenswrapper[31420]: I0220 12:13:55.590272 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb" event={"ID":"a745c583-8c85-4cd5-9495-c57fb3f69812","Type":"ContainerDied","Data":"286b8b6fa8614cb314732edfeb321b14ef8d6da6c239d9e21a1667bde4e774ba"}
Feb 20 12:13:55.591401 master-0 kubenswrapper[31420]: I0220 12:13:55.590313 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb" event={"ID":"a745c583-8c85-4cd5-9495-c57fb3f69812","Type":"ContainerStarted","Data":"be216b7142fa741f4b4b846d698ad2f1fe05585f3a185d0059f3974b5919bdad"}
Feb 20 12:13:55.609296 master-0 kubenswrapper[31420]: I0220 12:13:55.609229 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljsj8\" (UniqueName: \"kubernetes.io/projected/88be940c-d575-424c-8bc3-e3dc038eb1fb-kube-api-access-ljsj8\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:55.717768 master-0 kubenswrapper[31420]: I0220 12:13:55.717590 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:13:56.174282 master-0 kubenswrapper[31420]: I0220 12:13:56.172936 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"]
Feb 20 12:13:56.182607 master-0 kubenswrapper[31420]: W0220 12:13:56.182563 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88be940c_d575_424c_8bc3_e3dc038eb1fb.slice/crio-7558fe2962f836487a02a996788fefea172b3dd84ae23e671fe9bd31600d3666 WatchSource:0}: Error finding container 7558fe2962f836487a02a996788fefea172b3dd84ae23e671fe9bd31600d3666: Status 404 returned error can't find the container with id 7558fe2962f836487a02a996788fefea172b3dd84ae23e671fe9bd31600d3666
Feb 20 12:13:56.604165 master-0 kubenswrapper[31420]: I0220 12:13:56.604026 31420 generic.go:334] "Generic (PLEG): container finished" podID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerID="017fa434eb05b8ef8944020428db44b106589bb88ea5224bbb249aa759ed258d" exitCode=0
Feb 20 12:13:56.604165 master-0 kubenswrapper[31420]: I0220 12:13:56.604109 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j" event={"ID":"88be940c-d575-424c-8bc3-e3dc038eb1fb","Type":"ContainerDied","Data":"017fa434eb05b8ef8944020428db44b106589bb88ea5224bbb249aa759ed258d"}
Feb 20 12:13:56.604165 master-0 kubenswrapper[31420]: I0220 12:13:56.604151 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j" event={"ID":"88be940c-d575-424c-8bc3-e3dc038eb1fb","Type":"ContainerStarted","Data":"7558fe2962f836487a02a996788fefea172b3dd84ae23e671fe9bd31600d3666"}
Feb 20 12:13:57.614431 master-0 kubenswrapper[31420]: I0220 12:13:57.614361 31420 generic.go:334] "Generic (PLEG): container finished" podID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerID="8b86fb8a7e3f99b93c3c4dca80e195ee4680165cbf1fc4b42047f015e51cefba" exitCode=0
Feb 20 12:13:57.615610 master-0 kubenswrapper[31420]: I0220 12:13:57.614409 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb" event={"ID":"a745c583-8c85-4cd5-9495-c57fb3f69812","Type":"ContainerDied","Data":"8b86fb8a7e3f99b93c3c4dca80e195ee4680165cbf1fc4b42047f015e51cefba"}
Feb 20 12:13:58.623299 master-0 kubenswrapper[31420]: I0220 12:13:58.623211 31420 generic.go:334] "Generic (PLEG): container finished" podID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerID="cdda52fb858114f939529952aee73e86406f502f9e3cbc48c8bb3e8c48b864ca" exitCode=0
Feb 20 12:13:58.623299 master-0 kubenswrapper[31420]: I0220 12:13:58.623261 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb" event={"ID":"a745c583-8c85-4cd5-9495-c57fb3f69812","Type":"ContainerDied","Data":"cdda52fb858114f939529952aee73e86406f502f9e3cbc48c8bb3e8c48b864ca"}
Feb 20 12:13:59.636350 master-0 kubenswrapper[31420]: I0220 12:13:59.636248 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j" event={"ID":"88be940c-d575-424c-8bc3-e3dc038eb1fb","Type":"ContainerStarted","Data":"72fed2b468e09f9bb3aed88243f2ad79ddff5dfc60b162782d8664594398d6ab"}
Feb 20 12:13:59.640272 master-0 kubenswrapper[31420]: I0220 12:13:59.640224 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h" event={"ID":"8dc6066f-00aa-4a1c-962d-b18bfcb2890b","Type":"ContainerStarted","Data":"63e6b9a919d89af4cd0e4d74c540f7ac5b9ec6e46645fa74bed2fc1d78138442"}
Feb 20 12:14:00.016974 master-0 kubenswrapper[31420]: I0220 12:14:00.016911 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:14:00.184421 master-0 kubenswrapper[31420]: I0220 12:14:00.184362 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-bundle\") pod \"a745c583-8c85-4cd5-9495-c57fb3f69812\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") "
Feb 20 12:14:00.184658 master-0 kubenswrapper[31420]: I0220 12:14:00.184467 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hdt5\" (UniqueName: \"kubernetes.io/projected/a745c583-8c85-4cd5-9495-c57fb3f69812-kube-api-access-9hdt5\") pod \"a745c583-8c85-4cd5-9495-c57fb3f69812\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") "
Feb 20 12:14:00.184658 master-0 kubenswrapper[31420]: I0220 12:14:00.184597 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-util\") pod \"a745c583-8c85-4cd5-9495-c57fb3f69812\" (UID: \"a745c583-8c85-4cd5-9495-c57fb3f69812\") "
Feb 20 12:14:00.186307 master-0 kubenswrapper[31420]: I0220 12:14:00.186238 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-bundle" (OuterVolumeSpecName: "bundle") pod "a745c583-8c85-4cd5-9495-c57fb3f69812" (UID: "a745c583-8c85-4cd5-9495-c57fb3f69812"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:14:00.188915 master-0 kubenswrapper[31420]: I0220 12:14:00.188866 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a745c583-8c85-4cd5-9495-c57fb3f69812-kube-api-access-9hdt5" (OuterVolumeSpecName: "kube-api-access-9hdt5") pod "a745c583-8c85-4cd5-9495-c57fb3f69812" (UID: "a745c583-8c85-4cd5-9495-c57fb3f69812"). InnerVolumeSpecName "kube-api-access-9hdt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:14:00.216671 master-0 kubenswrapper[31420]: I0220 12:14:00.216486 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-util" (OuterVolumeSpecName: "util") pod "a745c583-8c85-4cd5-9495-c57fb3f69812" (UID: "a745c583-8c85-4cd5-9495-c57fb3f69812"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:14:00.287899 master-0 kubenswrapper[31420]: I0220 12:14:00.287788 31420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-util\") on node \"master-0\" DevicePath \"\""
Feb 20 12:14:00.288166 master-0 kubenswrapper[31420]: I0220 12:14:00.287886 31420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a745c583-8c85-4cd5-9495-c57fb3f69812-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:14:00.288166 master-0 kubenswrapper[31420]: I0220 12:14:00.287949 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hdt5\" (UniqueName: \"kubernetes.io/projected/a745c583-8c85-4cd5-9495-c57fb3f69812-kube-api-access-9hdt5\") on node \"master-0\" DevicePath \"\""
Feb 20 12:14:00.656758 master-0 kubenswrapper[31420]: I0220 12:14:00.656678 31420 generic.go:334] "Generic (PLEG): container finished" podID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerID="72fed2b468e09f9bb3aed88243f2ad79ddff5dfc60b162782d8664594398d6ab" exitCode=0
Feb 20 12:14:00.657791 master-0 kubenswrapper[31420]: I0220 12:14:00.656781 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j" event={"ID":"88be940c-d575-424c-8bc3-e3dc038eb1fb","Type":"ContainerDied","Data":"72fed2b468e09f9bb3aed88243f2ad79ddff5dfc60b162782d8664594398d6ab"}
Feb 20 12:14:00.660495 master-0 kubenswrapper[31420]: I0220 12:14:00.660032 31420 generic.go:334] "Generic (PLEG): container finished" podID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerID="63e6b9a919d89af4cd0e4d74c540f7ac5b9ec6e46645fa74bed2fc1d78138442" exitCode=0
Feb 20 12:14:00.660495 master-0 kubenswrapper[31420]: I0220 12:14:00.660085 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h" event={"ID":"8dc6066f-00aa-4a1c-962d-b18bfcb2890b","Type":"ContainerDied","Data":"63e6b9a919d89af4cd0e4d74c540f7ac5b9ec6e46645fa74bed2fc1d78138442"}
Feb 20 12:14:00.664958 master-0 kubenswrapper[31420]: I0220 12:14:00.664857 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb" event={"ID":"a745c583-8c85-4cd5-9495-c57fb3f69812","Type":"ContainerDied","Data":"be216b7142fa741f4b4b846d698ad2f1fe05585f3a185d0059f3974b5919bdad"}
Feb 20 12:14:00.664958 master-0 kubenswrapper[31420]: I0220 12:14:00.664929 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213skgcb"
Feb 20 12:14:00.665206 master-0 kubenswrapper[31420]: I0220 12:14:00.664939 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be216b7142fa741f4b4b846d698ad2f1fe05585f3a185d0059f3974b5919bdad"
Feb 20 12:14:01.705451 master-0 kubenswrapper[31420]: I0220 12:14:01.681436 31420 generic.go:334] "Generic (PLEG): container finished" podID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerID="0af4954c05db50d3dfee47781547b1e19b5809c980d498ac0f053ddd80c2510f" exitCode=0
Feb 20 12:14:01.705451 master-0 kubenswrapper[31420]: I0220 12:14:01.681571 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j" event={"ID":"88be940c-d575-424c-8bc3-e3dc038eb1fb","Type":"ContainerDied","Data":"0af4954c05db50d3dfee47781547b1e19b5809c980d498ac0f053ddd80c2510f"}
Feb 20 12:14:01.708632 master-0 kubenswrapper[31420]: I0220 12:14:01.707212 31420 generic.go:334] "Generic (PLEG): container finished" podID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerID="2f87d3cb47a6f8018feb16c495174de7caa0a9c0531801071e7fb22ac3b0004c" exitCode=0
Feb 20 12:14:01.708632 master-0 kubenswrapper[31420]: I0220 12:14:01.707309 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h" event={"ID":"8dc6066f-00aa-4a1c-962d-b18bfcb2890b","Type":"ContainerDied","Data":"2f87d3cb47a6f8018feb16c495174de7caa0a9c0531801071e7fb22ac3b0004c"}
Feb 20 12:14:02.346469 master-0 kubenswrapper[31420]: I0220 12:14:02.346365 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"]
Feb 20 12:14:02.347005 master-0 kubenswrapper[31420]: E0220 12:14:02.346956 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerName="pull"
Feb 20 12:14:02.347005 master-0 kubenswrapper[31420]: I0220 12:14:02.346991 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerName="pull"
Feb 20 12:14:02.347170 master-0 kubenswrapper[31420]: E0220 12:14:02.347012 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerName="extract"
Feb 20 12:14:02.347170 master-0 kubenswrapper[31420]: I0220 12:14:02.347024 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerName="extract"
Feb 20 12:14:02.347170 master-0 kubenswrapper[31420]: E0220 12:14:02.347065 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerName="util"
Feb 20 12:14:02.347170 master-0 kubenswrapper[31420]: I0220 12:14:02.347075 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerName="util"
Feb 20 12:14:02.347409 master-0 kubenswrapper[31420]: I0220 12:14:02.347337 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="a745c583-8c85-4cd5-9495-c57fb3f69812" containerName="extract"
Feb 20 12:14:02.349467 master-0 kubenswrapper[31420]: I0220 12:14:02.349369 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.402380 master-0 kubenswrapper[31420]: I0220 12:14:02.402312 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"]
Feb 20 12:14:02.535796 master-0 kubenswrapper[31420]: I0220 12:14:02.535703 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.536411 master-0 kubenswrapper[31420]: I0220 12:14:02.536308 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsns6\" (UniqueName: \"kubernetes.io/projected/1c645142-0e23-4b4d-8415-4c58f0ce467f-kube-api-access-jsns6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.536511 master-0 kubenswrapper[31420]: I0220 12:14:02.536441 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.638794 master-0 kubenswrapper[31420]: I0220 12:14:02.638605 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsns6\" (UniqueName: \"kubernetes.io/projected/1c645142-0e23-4b4d-8415-4c58f0ce467f-kube-api-access-jsns6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.638794 master-0 kubenswrapper[31420]: I0220 12:14:02.638688 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.639211 master-0 kubenswrapper[31420]: I0220 12:14:02.639032 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.639706 master-0 kubenswrapper[31420]: I0220 12:14:02.639660 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.640056 master-0 kubenswrapper[31420]: I0220 12:14:02.639994 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.663738 master-0 kubenswrapper[31420]: I0220 12:14:02.663682 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsns6\" (UniqueName: \"kubernetes.io/projected/1c645142-0e23-4b4d-8415-4c58f0ce467f-kube-api-access-jsns6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:02.670482 master-0 kubenswrapper[31420]: I0220 12:14:02.670402 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"
Feb 20 12:14:03.198013 master-0 kubenswrapper[31420]: I0220 12:14:03.196806 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd"]
Feb 20 12:14:03.199243 master-0 kubenswrapper[31420]: W0220 12:14:03.199206 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c645142_0e23_4b4d_8415_4c58f0ce467f.slice/crio-afcf6ca2843bed9269e6931a732e108b850f9e32d88e2c1c2506bbe747f15569 WatchSource:0}: Error finding container afcf6ca2843bed9269e6931a732e108b850f9e32d88e2c1c2506bbe747f15569: Status 404 returned error can't find the container with id afcf6ca2843bed9269e6931a732e108b850f9e32d88e2c1c2506bbe747f15569
Feb 20 12:14:03.245779 master-0 kubenswrapper[31420]: I0220 12:14:03.245751 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j"
Feb 20 12:14:03.295672 master-0 kubenswrapper[31420]: I0220 12:14:03.295626 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h"
Feb 20 12:14:03.357881 master-0 kubenswrapper[31420]: I0220 12:14:03.357808 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-bundle\") pod \"88be940c-d575-424c-8bc3-e3dc038eb1fb\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") "
Feb 20 12:14:03.358288 master-0 kubenswrapper[31420]: I0220 12:14:03.357962 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-util\") pod \"88be940c-d575-424c-8bc3-e3dc038eb1fb\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") "
Feb 20 12:14:03.358288 master-0 kubenswrapper[31420]: I0220 12:14:03.358051 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljsj8\" (UniqueName: \"kubernetes.io/projected/88be940c-d575-424c-8bc3-e3dc038eb1fb-kube-api-access-ljsj8\") pod \"88be940c-d575-424c-8bc3-e3dc038eb1fb\" (UID: \"88be940c-d575-424c-8bc3-e3dc038eb1fb\") "
Feb 20 12:14:03.358638 master-0 kubenswrapper[31420]: I0220 12:14:03.358582 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-bundle" (OuterVolumeSpecName: "bundle") pod "88be940c-d575-424c-8bc3-e3dc038eb1fb" (UID: "88be940c-d575-424c-8bc3-e3dc038eb1fb"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:14:03.362061 master-0 kubenswrapper[31420]: I0220 12:14:03.362016 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88be940c-d575-424c-8bc3-e3dc038eb1fb-kube-api-access-ljsj8" (OuterVolumeSpecName: "kube-api-access-ljsj8") pod "88be940c-d575-424c-8bc3-e3dc038eb1fb" (UID: "88be940c-d575-424c-8bc3-e3dc038eb1fb"). InnerVolumeSpecName "kube-api-access-ljsj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:14:03.371334 master-0 kubenswrapper[31420]: I0220 12:14:03.371253 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-util" (OuterVolumeSpecName: "util") pod "88be940c-d575-424c-8bc3-e3dc038eb1fb" (UID: "88be940c-d575-424c-8bc3-e3dc038eb1fb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:14:03.459928 master-0 kubenswrapper[31420]: I0220 12:14:03.459270 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chpwf\" (UniqueName: \"kubernetes.io/projected/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-kube-api-access-chpwf\") pod \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " Feb 20 12:14:03.459928 master-0 kubenswrapper[31420]: I0220 12:14:03.459338 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-bundle\") pod \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\" (UID: \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " Feb 20 12:14:03.459928 master-0 kubenswrapper[31420]: I0220 12:14:03.459403 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-util\") pod \"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\" (UID: 
\"8dc6066f-00aa-4a1c-962d-b18bfcb2890b\") " Feb 20 12:14:03.459928 master-0 kubenswrapper[31420]: I0220 12:14:03.459867 31420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-util\") on node \"master-0\" DevicePath \"\"" Feb 20 12:14:03.459928 master-0 kubenswrapper[31420]: I0220 12:14:03.459885 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljsj8\" (UniqueName: \"kubernetes.io/projected/88be940c-d575-424c-8bc3-e3dc038eb1fb-kube-api-access-ljsj8\") on node \"master-0\" DevicePath \"\"" Feb 20 12:14:03.459928 master-0 kubenswrapper[31420]: I0220 12:14:03.459901 31420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88be940c-d575-424c-8bc3-e3dc038eb1fb-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:14:03.460549 master-0 kubenswrapper[31420]: I0220 12:14:03.460489 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-bundle" (OuterVolumeSpecName: "bundle") pod "8dc6066f-00aa-4a1c-962d-b18bfcb2890b" (UID: "8dc6066f-00aa-4a1c-962d-b18bfcb2890b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:14:03.462569 master-0 kubenswrapper[31420]: I0220 12:14:03.462453 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-kube-api-access-chpwf" (OuterVolumeSpecName: "kube-api-access-chpwf") pod "8dc6066f-00aa-4a1c-962d-b18bfcb2890b" (UID: "8dc6066f-00aa-4a1c-962d-b18bfcb2890b"). InnerVolumeSpecName "kube-api-access-chpwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:14:03.473647 master-0 kubenswrapper[31420]: I0220 12:14:03.473522 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-util" (OuterVolumeSpecName: "util") pod "8dc6066f-00aa-4a1c-962d-b18bfcb2890b" (UID: "8dc6066f-00aa-4a1c-962d-b18bfcb2890b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:14:03.561677 master-0 kubenswrapper[31420]: I0220 12:14:03.561619 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chpwf\" (UniqueName: \"kubernetes.io/projected/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-kube-api-access-chpwf\") on node \"master-0\" DevicePath \"\"" Feb 20 12:14:03.562165 master-0 kubenswrapper[31420]: I0220 12:14:03.562132 31420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:14:03.562165 master-0 kubenswrapper[31420]: I0220 12:14:03.562149 31420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dc6066f-00aa-4a1c-962d-b18bfcb2890b-util\") on node \"master-0\" DevicePath \"\"" Feb 20 12:14:03.736053 master-0 kubenswrapper[31420]: I0220 12:14:03.735968 31420 generic.go:334] "Generic (PLEG): container finished" podID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerID="ce1279a8a20bc48469f3c846b313e17557814aaf408ac38e15421cb6504b4645" exitCode=0 Feb 20 12:14:03.736244 master-0 kubenswrapper[31420]: I0220 12:14:03.736060 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd" event={"ID":"1c645142-0e23-4b4d-8415-4c58f0ce467f","Type":"ContainerDied","Data":"ce1279a8a20bc48469f3c846b313e17557814aaf408ac38e15421cb6504b4645"} Feb 20 12:14:03.736244 master-0 
kubenswrapper[31420]: I0220 12:14:03.736161 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd" event={"ID":"1c645142-0e23-4b4d-8415-4c58f0ce467f","Type":"ContainerStarted","Data":"afcf6ca2843bed9269e6931a732e108b850f9e32d88e2c1c2506bbe747f15569"} Feb 20 12:14:03.741133 master-0 kubenswrapper[31420]: I0220 12:14:03.741076 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j" Feb 20 12:14:03.741822 master-0 kubenswrapper[31420]: I0220 12:14:03.741070 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecascn6j" event={"ID":"88be940c-d575-424c-8bc3-e3dc038eb1fb","Type":"ContainerDied","Data":"7558fe2962f836487a02a996788fefea172b3dd84ae23e671fe9bd31600d3666"} Feb 20 12:14:03.741975 master-0 kubenswrapper[31420]: I0220 12:14:03.741883 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7558fe2962f836487a02a996788fefea172b3dd84ae23e671fe9bd31600d3666" Feb 20 12:14:03.746194 master-0 kubenswrapper[31420]: I0220 12:14:03.746141 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h" event={"ID":"8dc6066f-00aa-4a1c-962d-b18bfcb2890b","Type":"ContainerDied","Data":"8f379181b1e9723f93733cb55c25158cb7fe410f719192b04245e5a223233cb0"} Feb 20 12:14:03.746310 master-0 kubenswrapper[31420]: I0220 12:14:03.746200 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f379181b1e9723f93733cb55c25158cb7fe410f719192b04245e5a223233cb0" Feb 20 12:14:03.746310 master-0 kubenswrapper[31420]: I0220 12:14:03.746249 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zhl9h" Feb 20 12:14:06.774947 master-0 kubenswrapper[31420]: I0220 12:14:06.774891 31420 generic.go:334] "Generic (PLEG): container finished" podID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerID="4f2acce852b94f893c557f9b91842a9e8fa77cd541729eaec5d555fd4df2290d" exitCode=0 Feb 20 12:14:06.774947 master-0 kubenswrapper[31420]: I0220 12:14:06.774946 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd" event={"ID":"1c645142-0e23-4b4d-8415-4c58f0ce467f","Type":"ContainerDied","Data":"4f2acce852b94f893c557f9b91842a9e8fa77cd541729eaec5d555fd4df2290d"} Feb 20 12:14:07.782821 master-0 kubenswrapper[31420]: I0220 12:14:07.782764 31420 generic.go:334] "Generic (PLEG): container finished" podID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerID="1538fb8f72a6c61aa6814e2a4269117a19d747e4d219f2bf8cf995df00ffed54" exitCode=0 Feb 20 12:14:07.782821 master-0 kubenswrapper[31420]: I0220 12:14:07.782816 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd" event={"ID":"1c645142-0e23-4b4d-8415-4c58f0ce467f","Type":"ContainerDied","Data":"1538fb8f72a6c61aa6814e2a4269117a19d747e4d219f2bf8cf995df00ffed54"} Feb 20 12:14:09.214981 master-0 kubenswrapper[31420]: I0220 12:14:09.214932 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd" Feb 20 12:14:09.272705 master-0 kubenswrapper[31420]: I0220 12:14:09.272645 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-bundle\") pod \"1c645142-0e23-4b4d-8415-4c58f0ce467f\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " Feb 20 12:14:09.272705 master-0 kubenswrapper[31420]: I0220 12:14:09.272710 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-util\") pod \"1c645142-0e23-4b4d-8415-4c58f0ce467f\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " Feb 20 12:14:09.272982 master-0 kubenswrapper[31420]: I0220 12:14:09.272774 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsns6\" (UniqueName: \"kubernetes.io/projected/1c645142-0e23-4b4d-8415-4c58f0ce467f-kube-api-access-jsns6\") pod \"1c645142-0e23-4b4d-8415-4c58f0ce467f\" (UID: \"1c645142-0e23-4b4d-8415-4c58f0ce467f\") " Feb 20 12:14:09.276259 master-0 kubenswrapper[31420]: I0220 12:14:09.276184 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c645142-0e23-4b4d-8415-4c58f0ce467f-kube-api-access-jsns6" (OuterVolumeSpecName: "kube-api-access-jsns6") pod "1c645142-0e23-4b4d-8415-4c58f0ce467f" (UID: "1c645142-0e23-4b4d-8415-4c58f0ce467f"). InnerVolumeSpecName "kube-api-access-jsns6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:14:09.276513 master-0 kubenswrapper[31420]: I0220 12:14:09.276481 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-bundle" (OuterVolumeSpecName: "bundle") pod "1c645142-0e23-4b4d-8415-4c58f0ce467f" (UID: "1c645142-0e23-4b4d-8415-4c58f0ce467f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:14:09.287333 master-0 kubenswrapper[31420]: I0220 12:14:09.287262 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-util" (OuterVolumeSpecName: "util") pod "1c645142-0e23-4b4d-8415-4c58f0ce467f" (UID: "1c645142-0e23-4b4d-8415-4c58f0ce467f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:14:09.375492 master-0 kubenswrapper[31420]: I0220 12:14:09.375403 31420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:14:09.375492 master-0 kubenswrapper[31420]: I0220 12:14:09.375485 31420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c645142-0e23-4b4d-8415-4c58f0ce467f-util\") on node \"master-0\" DevicePath \"\"" Feb 20 12:14:09.375738 master-0 kubenswrapper[31420]: I0220 12:14:09.375508 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsns6\" (UniqueName: \"kubernetes.io/projected/1c645142-0e23-4b4d-8415-4c58f0ce467f-kube-api-access-jsns6\") on node \"master-0\" DevicePath \"\"" Feb 20 12:14:09.800806 master-0 kubenswrapper[31420]: I0220 12:14:09.800689 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd" 
event={"ID":"1c645142-0e23-4b4d-8415-4c58f0ce467f","Type":"ContainerDied","Data":"afcf6ca2843bed9269e6931a732e108b850f9e32d88e2c1c2506bbe747f15569"} Feb 20 12:14:09.800806 master-0 kubenswrapper[31420]: I0220 12:14:09.800741 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afcf6ca2843bed9269e6931a732e108b850f9e32d88e2c1c2506bbe747f15569" Feb 20 12:14:09.800806 master-0 kubenswrapper[31420]: I0220 12:14:09.800766 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gq2rd" Feb 20 12:14:13.790233 master-0 kubenswrapper[31420]: I0220 12:14:13.790146 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-slnc4"] Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: E0220 12:14:13.790476 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerName="pull" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790491 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerName="pull" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: E0220 12:14:13.790514 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerName="util" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790521 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerName="util" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: E0220 12:14:13.790552 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerName="extract" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790561 31420 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerName="extract" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: E0220 12:14:13.790574 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerName="util" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790582 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerName="util" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: E0220 12:14:13.790596 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerName="extract" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790603 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerName="extract" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: E0220 12:14:13.790617 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerName="extract" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790626 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerName="extract" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: E0220 12:14:13.790644 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerName="util" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790652 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerName="util" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: E0220 12:14:13.790665 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerName="pull" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790673 31420 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerName="pull" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: E0220 12:14:13.790697 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerName="pull" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790704 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerName="pull" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790853 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc6066f-00aa-4a1c-962d-b18bfcb2890b" containerName="extract" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790867 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="88be940c-d575-424c-8bc3-e3dc038eb1fb" containerName="extract" Feb 20 12:14:13.790994 master-0 kubenswrapper[31420]: I0220 12:14:13.790907 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c645142-0e23-4b4d-8415-4c58f0ce467f" containerName="extract" Feb 20 12:14:13.792139 master-0 kubenswrapper[31420]: I0220 12:14:13.791453 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-slnc4" Feb 20 12:14:13.794460 master-0 kubenswrapper[31420]: I0220 12:14:13.794384 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 20 12:14:13.795655 master-0 kubenswrapper[31420]: I0220 12:14:13.795593 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 20 12:14:13.844739 master-0 kubenswrapper[31420]: I0220 12:14:13.841469 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-slnc4"] Feb 20 12:14:13.858115 master-0 kubenswrapper[31420]: I0220 12:14:13.858023 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbkpb\" (UniqueName: \"kubernetes.io/projected/ba64f39d-a56c-45b2-8dcb-b796be88d71b-kube-api-access-mbkpb\") pod \"nmstate-operator-694c9596b7-slnc4\" (UID: \"ba64f39d-a56c-45b2-8dcb-b796be88d71b\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-slnc4" Feb 20 12:14:13.960084 master-0 kubenswrapper[31420]: I0220 12:14:13.960005 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbkpb\" (UniqueName: \"kubernetes.io/projected/ba64f39d-a56c-45b2-8dcb-b796be88d71b-kube-api-access-mbkpb\") pod \"nmstate-operator-694c9596b7-slnc4\" (UID: \"ba64f39d-a56c-45b2-8dcb-b796be88d71b\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-slnc4" Feb 20 12:14:14.083794 master-0 kubenswrapper[31420]: I0220 12:14:14.083688 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbkpb\" (UniqueName: \"kubernetes.io/projected/ba64f39d-a56c-45b2-8dcb-b796be88d71b-kube-api-access-mbkpb\") pod \"nmstate-operator-694c9596b7-slnc4\" (UID: \"ba64f39d-a56c-45b2-8dcb-b796be88d71b\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-slnc4" Feb 20 12:14:14.111151 
master-0 kubenswrapper[31420]: I0220 12:14:14.111085 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-slnc4" Feb 20 12:14:14.671845 master-0 kubenswrapper[31420]: I0220 12:14:14.671536 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-slnc4"] Feb 20 12:14:14.837837 master-0 kubenswrapper[31420]: I0220 12:14:14.837763 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-slnc4" event={"ID":"ba64f39d-a56c-45b2-8dcb-b796be88d71b","Type":"ContainerStarted","Data":"2ea91a820ba73614e3e55eea3bd56147beb15b3db0bf224daa80d7323ab60e29"} Feb 20 12:14:17.885589 master-0 kubenswrapper[31420]: I0220 12:14:17.885518 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-slnc4" event={"ID":"ba64f39d-a56c-45b2-8dcb-b796be88d71b","Type":"ContainerStarted","Data":"916f0ca5a11eeb0cbc1018400a64d8f0e387811a09c35f8ae8bce9b6de53e2a6"} Feb 20 12:14:17.929940 master-0 kubenswrapper[31420]: I0220 12:14:17.929833 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-slnc4" podStartSLOduration=2.49362961 podStartE2EDuration="4.929808786s" podCreationTimestamp="2026-02-20 12:14:13 +0000 UTC" firstStartedPulling="2026-02-20 12:14:14.674672078 +0000 UTC m=+559.393910329" lastFinishedPulling="2026-02-20 12:14:17.110851264 +0000 UTC m=+561.830089505" observedRunningTime="2026-02-20 12:14:17.924406905 +0000 UTC m=+562.643645156" watchObservedRunningTime="2026-02-20 12:14:17.929808786 +0000 UTC m=+562.649047037" Feb 20 12:14:18.328309 master-0 kubenswrapper[31420]: I0220 12:14:18.328254 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78"] Feb 20 12:14:18.329156 master-0 kubenswrapper[31420]: I0220 12:14:18.329126 31420 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" Feb 20 12:14:18.338069 master-0 kubenswrapper[31420]: I0220 12:14:18.335474 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 20 12:14:18.338069 master-0 kubenswrapper[31420]: I0220 12:14:18.335783 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 20 12:14:18.338069 master-0 kubenswrapper[31420]: I0220 12:14:18.336001 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 20 12:14:18.338069 master-0 kubenswrapper[31420]: I0220 12:14:18.336369 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 20 12:14:18.394379 master-0 kubenswrapper[31420]: I0220 12:14:18.394251 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78"] Feb 20 12:14:18.443903 master-0 kubenswrapper[31420]: I0220 12:14:18.442600 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05b963e1-7eca-4b48-b411-ce2bbf48fbf2-apiservice-cert\") pod \"metallb-operator-controller-manager-7865667bdc-lwg78\" (UID: \"05b963e1-7eca-4b48-b411-ce2bbf48fbf2\") " pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" Feb 20 12:14:18.443903 master-0 kubenswrapper[31420]: I0220 12:14:18.442661 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05b963e1-7eca-4b48-b411-ce2bbf48fbf2-webhook-cert\") pod \"metallb-operator-controller-manager-7865667bdc-lwg78\" (UID: 
\"05b963e1-7eca-4b48-b411-ce2bbf48fbf2\") " pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" Feb 20 12:14:18.443903 master-0 kubenswrapper[31420]: I0220 12:14:18.442712 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2pk\" (UniqueName: \"kubernetes.io/projected/05b963e1-7eca-4b48-b411-ce2bbf48fbf2-kube-api-access-xr2pk\") pod \"metallb-operator-controller-manager-7865667bdc-lwg78\" (UID: \"05b963e1-7eca-4b48-b411-ce2bbf48fbf2\") " pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" Feb 20 12:14:18.544771 master-0 kubenswrapper[31420]: I0220 12:14:18.544707 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05b963e1-7eca-4b48-b411-ce2bbf48fbf2-apiservice-cert\") pod \"metallb-operator-controller-manager-7865667bdc-lwg78\" (UID: \"05b963e1-7eca-4b48-b411-ce2bbf48fbf2\") " pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" Feb 20 12:14:18.544998 master-0 kubenswrapper[31420]: I0220 12:14:18.544794 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05b963e1-7eca-4b48-b411-ce2bbf48fbf2-webhook-cert\") pod \"metallb-operator-controller-manager-7865667bdc-lwg78\" (UID: \"05b963e1-7eca-4b48-b411-ce2bbf48fbf2\") " pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" Feb 20 12:14:18.544998 master-0 kubenswrapper[31420]: I0220 12:14:18.544871 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2pk\" (UniqueName: \"kubernetes.io/projected/05b963e1-7eca-4b48-b411-ce2bbf48fbf2-kube-api-access-xr2pk\") pod \"metallb-operator-controller-manager-7865667bdc-lwg78\" (UID: \"05b963e1-7eca-4b48-b411-ce2bbf48fbf2\") " pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" Feb 20 
12:14:18.548542 master-0 kubenswrapper[31420]: I0220 12:14:18.548055 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05b963e1-7eca-4b48-b411-ce2bbf48fbf2-apiservice-cert\") pod \"metallb-operator-controller-manager-7865667bdc-lwg78\" (UID: \"05b963e1-7eca-4b48-b411-ce2bbf48fbf2\") " pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78"
Feb 20 12:14:18.551687 master-0 kubenswrapper[31420]: I0220 12:14:18.550375 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05b963e1-7eca-4b48-b411-ce2bbf48fbf2-webhook-cert\") pod \"metallb-operator-controller-manager-7865667bdc-lwg78\" (UID: \"05b963e1-7eca-4b48-b411-ce2bbf48fbf2\") " pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78"
Feb 20 12:14:18.576552 master-0 kubenswrapper[31420]: I0220 12:14:18.575744 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2pk\" (UniqueName: \"kubernetes.io/projected/05b963e1-7eca-4b48-b411-ce2bbf48fbf2-kube-api-access-xr2pk\") pod \"metallb-operator-controller-manager-7865667bdc-lwg78\" (UID: \"05b963e1-7eca-4b48-b411-ce2bbf48fbf2\") " pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78"
Feb 20 12:14:18.643018 master-0 kubenswrapper[31420]: I0220 12:14:18.642869 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78"
Feb 20 12:14:18.829293 master-0 kubenswrapper[31420]: I0220 12:14:18.829133 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"]
Feb 20 12:14:18.830214 master-0 kubenswrapper[31420]: I0220 12:14:18.830183 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:18.836451 master-0 kubenswrapper[31420]: I0220 12:14:18.836419 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 20 12:14:18.836626 master-0 kubenswrapper[31420]: I0220 12:14:18.836606 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 20 12:14:18.855260 master-0 kubenswrapper[31420]: I0220 12:14:18.855209 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"]
Feb 20 12:14:18.962447 master-0 kubenswrapper[31420]: I0220 12:14:18.962333 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/882e49fa-c8b8-4f18-a340-4dfdd950a449-webhook-cert\") pod \"metallb-operator-webhook-server-8486f65d77-9ck87\" (UID: \"882e49fa-c8b8-4f18-a340-4dfdd950a449\") " pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:18.963006 master-0 kubenswrapper[31420]: I0220 12:14:18.962988 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/882e49fa-c8b8-4f18-a340-4dfdd950a449-apiservice-cert\") pod \"metallb-operator-webhook-server-8486f65d77-9ck87\" (UID: \"882e49fa-c8b8-4f18-a340-4dfdd950a449\") " pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:18.963130 master-0 kubenswrapper[31420]: I0220 12:14:18.963114 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwzt\" (UniqueName: \"kubernetes.io/projected/882e49fa-c8b8-4f18-a340-4dfdd950a449-kube-api-access-sdwzt\") pod \"metallb-operator-webhook-server-8486f65d77-9ck87\" (UID: \"882e49fa-c8b8-4f18-a340-4dfdd950a449\") " pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:19.070545 master-0 kubenswrapper[31420]: I0220 12:14:19.070401 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/882e49fa-c8b8-4f18-a340-4dfdd950a449-webhook-cert\") pod \"metallb-operator-webhook-server-8486f65d77-9ck87\" (UID: \"882e49fa-c8b8-4f18-a340-4dfdd950a449\") " pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:19.070545 master-0 kubenswrapper[31420]: I0220 12:14:19.070474 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/882e49fa-c8b8-4f18-a340-4dfdd950a449-apiservice-cert\") pod \"metallb-operator-webhook-server-8486f65d77-9ck87\" (UID: \"882e49fa-c8b8-4f18-a340-4dfdd950a449\") " pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:19.070545 master-0 kubenswrapper[31420]: I0220 12:14:19.070544 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwzt\" (UniqueName: \"kubernetes.io/projected/882e49fa-c8b8-4f18-a340-4dfdd950a449-kube-api-access-sdwzt\") pod \"metallb-operator-webhook-server-8486f65d77-9ck87\" (UID: \"882e49fa-c8b8-4f18-a340-4dfdd950a449\") " pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:19.080543 master-0 kubenswrapper[31420]: I0220 12:14:19.077347 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/882e49fa-c8b8-4f18-a340-4dfdd950a449-webhook-cert\") pod \"metallb-operator-webhook-server-8486f65d77-9ck87\" (UID: \"882e49fa-c8b8-4f18-a340-4dfdd950a449\") " pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:19.086542 master-0 kubenswrapper[31420]: I0220 12:14:19.081326 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/882e49fa-c8b8-4f18-a340-4dfdd950a449-apiservice-cert\") pod \"metallb-operator-webhook-server-8486f65d77-9ck87\" (UID: \"882e49fa-c8b8-4f18-a340-4dfdd950a449\") " pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:19.133549 master-0 kubenswrapper[31420]: I0220 12:14:19.132837 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwzt\" (UniqueName: \"kubernetes.io/projected/882e49fa-c8b8-4f18-a340-4dfdd950a449-kube-api-access-sdwzt\") pod \"metallb-operator-webhook-server-8486f65d77-9ck87\" (UID: \"882e49fa-c8b8-4f18-a340-4dfdd950a449\") " pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:19.163553 master-0 kubenswrapper[31420]: I0220 12:14:19.154907 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:19.184660 master-0 kubenswrapper[31420]: I0220 12:14:19.184608 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78"]
Feb 20 12:14:19.199672 master-0 kubenswrapper[31420]: W0220 12:14:19.194765 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b963e1_7eca_4b48_b411_ce2bbf48fbf2.slice/crio-947c774df030eae1d70e3d09aef6929cfdcf27f737b7a67367a437d29da359bb WatchSource:0}: Error finding container 947c774df030eae1d70e3d09aef6929cfdcf27f737b7a67367a437d29da359bb: Status 404 returned error can't find the container with id 947c774df030eae1d70e3d09aef6929cfdcf27f737b7a67367a437d29da359bb
Feb 20 12:14:19.629725 master-0 kubenswrapper[31420]: I0220 12:14:19.629581 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"]
Feb 20 12:14:19.918692 master-0 kubenswrapper[31420]: I0220 12:14:19.915412 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" event={"ID":"05b963e1-7eca-4b48-b411-ce2bbf48fbf2","Type":"ContainerStarted","Data":"947c774df030eae1d70e3d09aef6929cfdcf27f737b7a67367a437d29da359bb"}
Feb 20 12:14:19.918692 master-0 kubenswrapper[31420]: I0220 12:14:19.917398 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87" event={"ID":"882e49fa-c8b8-4f18-a340-4dfdd950a449","Type":"ContainerStarted","Data":"0268ce140a36c1386bdc734fcdda0324d2ff31adaf53622bf6fbe0bffc5990e6"}
Feb 20 12:14:22.941447 master-0 kubenswrapper[31420]: I0220 12:14:22.941312 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" event={"ID":"05b963e1-7eca-4b48-b411-ce2bbf48fbf2","Type":"ContainerStarted","Data":"ad27c1ce9328fad6e3806e220d149362479daeb5b57a4f4e82c48facd22393df"}
Feb 20 12:14:22.941957 master-0 kubenswrapper[31420]: I0220 12:14:22.941603 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78"
Feb 20 12:14:22.969996 master-0 kubenswrapper[31420]: I0220 12:14:22.969905 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" podStartSLOduration=1.560237029 podStartE2EDuration="4.969884335s" podCreationTimestamp="2026-02-20 12:14:18 +0000 UTC" firstStartedPulling="2026-02-20 12:14:19.201083238 +0000 UTC m=+563.920321479" lastFinishedPulling="2026-02-20 12:14:22.610730544 +0000 UTC m=+567.329968785" observedRunningTime="2026-02-20 12:14:22.967514899 +0000 UTC m=+567.686753140" watchObservedRunningTime="2026-02-20 12:14:22.969884335 +0000 UTC m=+567.689122586"
Feb 20 12:14:25.967227 master-0 kubenswrapper[31420]: I0220 12:14:25.967181 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87" event={"ID":"882e49fa-c8b8-4f18-a340-4dfdd950a449","Type":"ContainerStarted","Data":"8a111e0966839dc32281ecefb47618913ee6fbe67c28a5c5bb53a1c45775d35b"}
Feb 20 12:14:26.037146 master-0 kubenswrapper[31420]: I0220 12:14:26.037068 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87" podStartSLOduration=2.837815979 podStartE2EDuration="8.037045557s" podCreationTimestamp="2026-02-20 12:14:18 +0000 UTC" firstStartedPulling="2026-02-20 12:14:19.647461583 +0000 UTC m=+564.366699824" lastFinishedPulling="2026-02-20 12:14:24.846691151 +0000 UTC m=+569.565929402" observedRunningTime="2026-02-20 12:14:26.028714183 +0000 UTC m=+570.747952434" watchObservedRunningTime="2026-02-20 12:14:26.037045557 +0000 UTC m=+570.756283788"
Feb 20 12:14:26.975794 master-0 kubenswrapper[31420]: I0220 12:14:26.975720 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:34.213831 master-0 kubenswrapper[31420]: I0220 12:14:34.213658 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b"]
Feb 20 12:14:34.215358 master-0 kubenswrapper[31420]: I0220 12:14:34.215277 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b"
Feb 20 12:14:34.220266 master-0 kubenswrapper[31420]: I0220 12:14:34.220169 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 20 12:14:34.220837 master-0 kubenswrapper[31420]: I0220 12:14:34.220772 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 20 12:14:34.294468 master-0 kubenswrapper[31420]: I0220 12:14:34.255447 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b"]
Feb 20 12:14:34.397407 master-0 kubenswrapper[31420]: I0220 12:14:34.397333 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwzbv\" (UniqueName: \"kubernetes.io/projected/39357b50-66a2-4e48-be49-9a45a73e7b7f-kube-api-access-lwzbv\") pod \"obo-prometheus-operator-68bc856cb9-mmc4b\" (UID: \"39357b50-66a2-4e48-be49-9a45a73e7b7f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b"
Feb 20 12:14:34.499345 master-0 kubenswrapper[31420]: I0220 12:14:34.499184 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwzbv\" (UniqueName: \"kubernetes.io/projected/39357b50-66a2-4e48-be49-9a45a73e7b7f-kube-api-access-lwzbv\") pod \"obo-prometheus-operator-68bc856cb9-mmc4b\" (UID: \"39357b50-66a2-4e48-be49-9a45a73e7b7f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b"
Feb 20 12:14:34.565654 master-0 kubenswrapper[31420]: I0220 12:14:34.561618 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwzbv\" (UniqueName: \"kubernetes.io/projected/39357b50-66a2-4e48-be49-9a45a73e7b7f-kube-api-access-lwzbv\") pod \"obo-prometheus-operator-68bc856cb9-mmc4b\" (UID: \"39357b50-66a2-4e48-be49-9a45a73e7b7f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b"
Feb 20 12:14:34.598240 master-0 kubenswrapper[31420]: I0220 12:14:34.598116 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b"
Feb 20 12:14:34.782077 master-0 kubenswrapper[31420]: I0220 12:14:34.776996 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"]
Feb 20 12:14:34.811554 master-0 kubenswrapper[31420]: I0220 12:14:34.810728 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"
Feb 20 12:14:34.812858 master-0 kubenswrapper[31420]: I0220 12:14:34.812817 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 20 12:14:34.911362 master-0 kubenswrapper[31420]: I0220 12:14:34.911283 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc\" (UID: \"b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"
Feb 20 12:14:34.911630 master-0 kubenswrapper[31420]: I0220 12:14:34.911401 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc\" (UID: \"b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"
Feb 20 12:14:34.935951 master-0 kubenswrapper[31420]: I0220 12:14:34.935828 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"]
Feb 20 12:14:34.938884 master-0 kubenswrapper[31420]: I0220 12:14:34.938796 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"
Feb 20 12:14:34.947803 master-0 kubenswrapper[31420]: I0220 12:14:34.947728 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"]
Feb 20 12:14:35.034640 master-0 kubenswrapper[31420]: I0220 12:14:35.031514 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc\" (UID: \"b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"
Feb 20 12:14:35.034640 master-0 kubenswrapper[31420]: I0220 12:14:35.031633 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cee5fd3c-ffb3-47a1-a219-51b2c6f756b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z\" (UID: \"cee5fd3c-ffb3-47a1-a219-51b2c6f756b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"
Feb 20 12:14:35.034640 master-0 kubenswrapper[31420]: I0220 12:14:35.031662 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc\" (UID: \"b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"
Feb 20 12:14:35.034640 master-0 kubenswrapper[31420]: I0220 12:14:35.031701 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cee5fd3c-ffb3-47a1-a219-51b2c6f756b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z\" (UID: \"cee5fd3c-ffb3-47a1-a219-51b2c6f756b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"
Feb 20 12:14:35.038559 master-0 kubenswrapper[31420]: I0220 12:14:35.035467 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc\" (UID: \"b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"
Feb 20 12:14:35.038559 master-0 kubenswrapper[31420]: I0220 12:14:35.038341 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc\" (UID: \"b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"
Feb 20 12:14:35.052563 master-0 kubenswrapper[31420]: I0220 12:14:35.049213 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"]
Feb 20 12:14:35.131969 master-0 kubenswrapper[31420]: I0220 12:14:35.130967 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"
Feb 20 12:14:35.144592 master-0 kubenswrapper[31420]: I0220 12:14:35.144515 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cee5fd3c-ffb3-47a1-a219-51b2c6f756b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z\" (UID: \"cee5fd3c-ffb3-47a1-a219-51b2c6f756b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"
Feb 20 12:14:35.144752 master-0 kubenswrapper[31420]: I0220 12:14:35.144612 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cee5fd3c-ffb3-47a1-a219-51b2c6f756b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z\" (UID: \"cee5fd3c-ffb3-47a1-a219-51b2c6f756b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"
Feb 20 12:14:35.147696 master-0 kubenswrapper[31420]: I0220 12:14:35.147667 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cee5fd3c-ffb3-47a1-a219-51b2c6f756b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z\" (UID: \"cee5fd3c-ffb3-47a1-a219-51b2c6f756b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"
Feb 20 12:14:35.147757 master-0 kubenswrapper[31420]: I0220 12:14:35.147728 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cee5fd3c-ffb3-47a1-a219-51b2c6f756b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z\" (UID: \"cee5fd3c-ffb3-47a1-a219-51b2c6f756b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"
Feb 20 12:14:35.259474 master-0 kubenswrapper[31420]: I0220 12:14:35.259409 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"
Feb 20 12:14:35.336345 master-0 kubenswrapper[31420]: I0220 12:14:35.334098 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b"]
Feb 20 12:14:35.655011 master-0 kubenswrapper[31420]: I0220 12:14:35.654858 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pmdps"]
Feb 20 12:14:35.656127 master-0 kubenswrapper[31420]: I0220 12:14:35.656084 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pmdps"
Feb 20 12:14:35.681760 master-0 kubenswrapper[31420]: I0220 12:14:35.660940 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 20 12:14:35.763016 master-0 kubenswrapper[31420]: I0220 12:14:35.762909 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2nv\" (UniqueName: \"kubernetes.io/projected/70386252-80d5-4afd-a6a4-ea5e26258dd5-kube-api-access-dm2nv\") pod \"observability-operator-59bdc8b94-pmdps\" (UID: \"70386252-80d5-4afd-a6a4-ea5e26258dd5\") " pod="openshift-operators/observability-operator-59bdc8b94-pmdps"
Feb 20 12:14:35.763270 master-0 kubenswrapper[31420]: I0220 12:14:35.763028 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/70386252-80d5-4afd-a6a4-ea5e26258dd5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pmdps\" (UID: \"70386252-80d5-4afd-a6a4-ea5e26258dd5\") " pod="openshift-operators/observability-operator-59bdc8b94-pmdps"
Feb 20 12:14:35.819136 master-0 kubenswrapper[31420]: W0220 12:14:35.818250 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97ff9c7_bfe0_4eec_85ac_6c3311bbb6eb.slice/crio-d766f0e911b298b3f497f2f29fb6bed213d892d07923f99c79eb085dbd540f0f WatchSource:0}: Error finding container d766f0e911b298b3f497f2f29fb6bed213d892d07923f99c79eb085dbd540f0f: Status 404 returned error can't find the container with id d766f0e911b298b3f497f2f29fb6bed213d892d07923f99c79eb085dbd540f0f
Feb 20 12:14:35.839565 master-0 kubenswrapper[31420]: I0220 12:14:35.837131 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pmdps"]
Feb 20 12:14:35.855765 master-0 kubenswrapper[31420]: I0220 12:14:35.854036 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc"]
Feb 20 12:14:35.876605 master-0 kubenswrapper[31420]: I0220 12:14:35.864807 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2nv\" (UniqueName: \"kubernetes.io/projected/70386252-80d5-4afd-a6a4-ea5e26258dd5-kube-api-access-dm2nv\") pod \"observability-operator-59bdc8b94-pmdps\" (UID: \"70386252-80d5-4afd-a6a4-ea5e26258dd5\") " pod="openshift-operators/observability-operator-59bdc8b94-pmdps"
Feb 20 12:14:35.876605 master-0 kubenswrapper[31420]: I0220 12:14:35.864899 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/70386252-80d5-4afd-a6a4-ea5e26258dd5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pmdps\" (UID: \"70386252-80d5-4afd-a6a4-ea5e26258dd5\") " pod="openshift-operators/observability-operator-59bdc8b94-pmdps"
Feb 20 12:14:35.876605 master-0 kubenswrapper[31420]: I0220 12:14:35.872611 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/70386252-80d5-4afd-a6a4-ea5e26258dd5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pmdps\" (UID: \"70386252-80d5-4afd-a6a4-ea5e26258dd5\") " pod="openshift-operators/observability-operator-59bdc8b94-pmdps"
Feb 20 12:14:35.958170 master-0 kubenswrapper[31420]: W0220 12:14:35.958081 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcee5fd3c_ffb3_47a1_a219_51b2c6f756b1.slice/crio-89ffc2f74ddbfae23c312beb795ae491a93e9e924a3959fc89a853dd4ddddeb1 WatchSource:0}: Error finding container 89ffc2f74ddbfae23c312beb795ae491a93e9e924a3959fc89a853dd4ddddeb1: Status 404 returned error can't find the container with id 89ffc2f74ddbfae23c312beb795ae491a93e9e924a3959fc89a853dd4ddddeb1
Feb 20 12:14:35.962715 master-0 kubenswrapper[31420]: I0220 12:14:35.962516 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2nv\" (UniqueName: \"kubernetes.io/projected/70386252-80d5-4afd-a6a4-ea5e26258dd5-kube-api-access-dm2nv\") pod \"observability-operator-59bdc8b94-pmdps\" (UID: \"70386252-80d5-4afd-a6a4-ea5e26258dd5\") " pod="openshift-operators/observability-operator-59bdc8b94-pmdps"
Feb 20 12:14:35.963454 master-0 kubenswrapper[31420]: I0220 12:14:35.963423 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z"]
Feb 20 12:14:35.988987 master-0 kubenswrapper[31420]: I0220 12:14:35.987272 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pmdps"
Feb 20 12:14:36.061660 master-0 kubenswrapper[31420]: I0220 12:14:36.061605 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z" event={"ID":"cee5fd3c-ffb3-47a1-a219-51b2c6f756b1","Type":"ContainerStarted","Data":"89ffc2f74ddbfae23c312beb795ae491a93e9e924a3959fc89a853dd4ddddeb1"}
Feb 20 12:14:36.063921 master-0 kubenswrapper[31420]: I0220 12:14:36.063775 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc" event={"ID":"b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb","Type":"ContainerStarted","Data":"d766f0e911b298b3f497f2f29fb6bed213d892d07923f99c79eb085dbd540f0f"}
Feb 20 12:14:36.072428 master-0 kubenswrapper[31420]: I0220 12:14:36.072258 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b" event={"ID":"39357b50-66a2-4e48-be49-9a45a73e7b7f","Type":"ContainerStarted","Data":"5b29bf9459830e570beda46e481208ad90aa5174aee9b2be8c06a81a240248b6"}
Feb 20 12:14:36.223667 master-0 kubenswrapper[31420]: I0220 12:14:36.223387 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4zsq8"]
Feb 20 12:14:36.224494 master-0 kubenswrapper[31420]: I0220 12:14:36.224441 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4zsq8"
Feb 20 12:14:36.257543 master-0 kubenswrapper[31420]: I0220 12:14:36.257469 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4zsq8"]
Feb 20 12:14:36.385571 master-0 kubenswrapper[31420]: I0220 12:14:36.384855 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e1db6b7e-63ed-4821-85b8-609d0c656f86-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4zsq8\" (UID: \"e1db6b7e-63ed-4821-85b8-609d0c656f86\") " pod="openshift-operators/perses-operator-5bf474d74f-4zsq8"
Feb 20 12:14:36.385571 master-0 kubenswrapper[31420]: I0220 12:14:36.384988 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp6pn\" (UniqueName: \"kubernetes.io/projected/e1db6b7e-63ed-4821-85b8-609d0c656f86-kube-api-access-sp6pn\") pod \"perses-operator-5bf474d74f-4zsq8\" (UID: \"e1db6b7e-63ed-4821-85b8-609d0c656f86\") " pod="openshift-operators/perses-operator-5bf474d74f-4zsq8"
Feb 20 12:14:36.486259 master-0 kubenswrapper[31420]: I0220 12:14:36.486115 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e1db6b7e-63ed-4821-85b8-609d0c656f86-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4zsq8\" (UID: \"e1db6b7e-63ed-4821-85b8-609d0c656f86\") " pod="openshift-operators/perses-operator-5bf474d74f-4zsq8"
Feb 20 12:14:36.486259 master-0 kubenswrapper[31420]: I0220 12:14:36.486197 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp6pn\" (UniqueName: \"kubernetes.io/projected/e1db6b7e-63ed-4821-85b8-609d0c656f86-kube-api-access-sp6pn\") pod \"perses-operator-5bf474d74f-4zsq8\" (UID: \"e1db6b7e-63ed-4821-85b8-609d0c656f86\") " pod="openshift-operators/perses-operator-5bf474d74f-4zsq8"
Feb 20 12:14:36.489542 master-0 kubenswrapper[31420]: I0220 12:14:36.487259 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e1db6b7e-63ed-4821-85b8-609d0c656f86-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4zsq8\" (UID: \"e1db6b7e-63ed-4821-85b8-609d0c656f86\") " pod="openshift-operators/perses-operator-5bf474d74f-4zsq8"
Feb 20 12:14:36.561598 master-0 kubenswrapper[31420]: I0220 12:14:36.561246 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pmdps"]
Feb 20 12:14:36.586892 master-0 kubenswrapper[31420]: I0220 12:14:36.583235 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp6pn\" (UniqueName: \"kubernetes.io/projected/e1db6b7e-63ed-4821-85b8-609d0c656f86-kube-api-access-sp6pn\") pod \"perses-operator-5bf474d74f-4zsq8\" (UID: \"e1db6b7e-63ed-4821-85b8-609d0c656f86\") " pod="openshift-operators/perses-operator-5bf474d74f-4zsq8"
Feb 20 12:14:36.870931 master-0 kubenswrapper[31420]: I0220 12:14:36.870857 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4zsq8"
Feb 20 12:14:37.079831 master-0 kubenswrapper[31420]: I0220 12:14:37.079777 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pmdps" event={"ID":"70386252-80d5-4afd-a6a4-ea5e26258dd5","Type":"ContainerStarted","Data":"6c005ea20d25a193c3505852a0edd30baa7aea85a252ab8738bd2780af9be193"}
Feb 20 12:14:37.380122 master-0 kubenswrapper[31420]: W0220 12:14:37.380035 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1db6b7e_63ed_4821_85b8_609d0c656f86.slice/crio-923dea44a9a3f87bc30f4dc354b8feee539a044d94900b06611094dfe516d24d WatchSource:0}: Error finding container 923dea44a9a3f87bc30f4dc354b8feee539a044d94900b06611094dfe516d24d: Status 404 returned error can't find the container with id 923dea44a9a3f87bc30f4dc354b8feee539a044d94900b06611094dfe516d24d
Feb 20 12:14:37.390390 master-0 kubenswrapper[31420]: I0220 12:14:37.390337 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4zsq8"]
Feb 20 12:14:38.101825 master-0 kubenswrapper[31420]: I0220 12:14:38.101743 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4zsq8" event={"ID":"e1db6b7e-63ed-4821-85b8-609d0c656f86","Type":"ContainerStarted","Data":"923dea44a9a3f87bc30f4dc354b8feee539a044d94900b06611094dfe516d24d"}
Feb 20 12:14:38.474751 master-0 kubenswrapper[31420]: I0220 12:14:38.473960 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"]
Feb 20 12:14:38.475296 master-0 kubenswrapper[31420]: I0220 12:14:38.475040 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"
Feb 20 12:14:38.483913 master-0 kubenswrapper[31420]: I0220 12:14:38.483879 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Feb 20 12:14:38.484176 master-0 kubenswrapper[31420]: I0220 12:14:38.484149 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Feb 20 12:14:38.654876 master-0 kubenswrapper[31420]: I0220 12:14:38.651297 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c5c9c30-4380-4a52-a4fd-fe878839ea4b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-p6qcs\" (UID: \"5c5c9c30-4380-4a52-a4fd-fe878839ea4b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"
Feb 20 12:14:38.654876 master-0 kubenswrapper[31420]: I0220 12:14:38.651621 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6z2z\" (UniqueName: \"kubernetes.io/projected/5c5c9c30-4380-4a52-a4fd-fe878839ea4b-kube-api-access-s6z2z\") pod \"cert-manager-operator-controller-manager-66c8bdd694-p6qcs\" (UID: \"5c5c9c30-4380-4a52-a4fd-fe878839ea4b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"
Feb 20 12:14:38.664667 master-0 kubenswrapper[31420]: I0220 12:14:38.664475 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"]
Feb 20 12:14:38.754704 master-0 kubenswrapper[31420]: I0220 12:14:38.754612 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6z2z\" (UniqueName: \"kubernetes.io/projected/5c5c9c30-4380-4a52-a4fd-fe878839ea4b-kube-api-access-s6z2z\") pod \"cert-manager-operator-controller-manager-66c8bdd694-p6qcs\" (UID: \"5c5c9c30-4380-4a52-a4fd-fe878839ea4b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"
Feb 20 12:14:38.754917 master-0 kubenswrapper[31420]: I0220 12:14:38.754742 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c5c9c30-4380-4a52-a4fd-fe878839ea4b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-p6qcs\" (UID: \"5c5c9c30-4380-4a52-a4fd-fe878839ea4b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"
Feb 20 12:14:38.755300 master-0 kubenswrapper[31420]: I0220 12:14:38.755268 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c5c9c30-4380-4a52-a4fd-fe878839ea4b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-p6qcs\" (UID: \"5c5c9c30-4380-4a52-a4fd-fe878839ea4b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"
Feb 20 12:14:39.166891 master-0 kubenswrapper[31420]: I0220 12:14:39.166254 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8486f65d77-9ck87"
Feb 20 12:14:39.204006 master-0 kubenswrapper[31420]: I0220 12:14:39.202370 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6z2z\" (UniqueName: \"kubernetes.io/projected/5c5c9c30-4380-4a52-a4fd-fe878839ea4b-kube-api-access-s6z2z\") pod \"cert-manager-operator-controller-manager-66c8bdd694-p6qcs\" (UID: \"5c5c9c30-4380-4a52-a4fd-fe878839ea4b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"
Feb 20 12:14:39.271702 master-0 kubenswrapper[31420]: I0220 12:14:39.268758 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"
Feb 20 12:14:40.231214 master-0 kubenswrapper[31420]: W0220 12:14:40.231130 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c5c9c30_4380_4a52_a4fd_fe878839ea4b.slice/crio-909da92d5ca335a1962444090e2bc8707fd8381091b2ae5d8287135709fda9bf WatchSource:0}: Error finding container 909da92d5ca335a1962444090e2bc8707fd8381091b2ae5d8287135709fda9bf: Status 404 returned error can't find the container with id 909da92d5ca335a1962444090e2bc8707fd8381091b2ae5d8287135709fda9bf
Feb 20 12:14:40.264639 master-0 kubenswrapper[31420]: I0220 12:14:40.263730 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs"]
Feb 20 12:14:41.145233 master-0 kubenswrapper[31420]: I0220 12:14:41.145161 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs" event={"ID":"5c5c9c30-4380-4a52-a4fd-fe878839ea4b","Type":"ContainerStarted","Data":"909da92d5ca335a1962444090e2bc8707fd8381091b2ae5d8287135709fda9bf"}
Feb 20 12:14:45.197069 master-0 kubenswrapper[31420]: I0220 12:14:45.197009 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z" event={"ID":"cee5fd3c-ffb3-47a1-a219-51b2c6f756b1","Type":"ContainerStarted","Data":"369873783a307d707fa8e246197c78f79b3285dffbf087791fadbcde7d5fd728"}
Feb 20 12:14:45.199911 master-0 kubenswrapper[31420]: I0220 12:14:45.199856 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc" event={"ID":"b97ff9c7-bfe0-4eec-85ac-6c3311bbb6eb","Type":"ContainerStarted","Data":"aff57ee28657633118c7b64e1a7576786e4fe8a510fbda876e68f8fb0fd36b96"}
Feb 20 12:14:45.201858 master-0 kubenswrapper[31420]: I0220 12:14:45.201831 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b" event={"ID":"39357b50-66a2-4e48-be49-9a45a73e7b7f","Type":"ContainerStarted","Data":"0bf71ce3486541889c1edfce9817f9109b6c77a424edc6647fc74dcc4fb8de38"}
Feb 20 12:14:45.242685 master-0 kubenswrapper[31420]: I0220 12:14:45.242574 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-xhd6z" podStartSLOduration=2.809317584 podStartE2EDuration="11.242552328s" podCreationTimestamp="2026-02-20 12:14:34 +0000 UTC" firstStartedPulling="2026-02-20 12:14:35.962959165 +0000 UTC m=+580.682197406" lastFinishedPulling="2026-02-20 12:14:44.396193909 +0000 UTC m=+589.115432150" observedRunningTime="2026-02-20 12:14:45.234542304 +0000 UTC m=+589.953780805" watchObservedRunningTime="2026-02-20 12:14:45.242552328 +0000 UTC m=+589.961790569"
Feb 20 12:14:45.273754 master-0 kubenswrapper[31420]: I0220 12:14:45.273662 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9fff69d4f-7zkkc" podStartSLOduration=2.69738744 podStartE2EDuration="11.273642469s" podCreationTimestamp="2026-02-20 12:14:34 +0000 UTC" firstStartedPulling="2026-02-20 12:14:35.823566361 +0000 UTC m=+580.542804612" lastFinishedPulling="2026-02-20 12:14:44.3998214 +0000 UTC m=+589.119059641" observedRunningTime="2026-02-20 12:14:45.269833172 +0000 UTC m=+589.989071413" watchObservedRunningTime="2026-02-20 12:14:45.273642469 +0000 UTC m=+589.992880710"
Feb 20 12:14:45.362396 master-0 kubenswrapper[31420]: I0220 12:14:45.361640 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mmc4b" podStartSLOduration=4.369557906 podStartE2EDuration="13.361613844s"
podCreationTimestamp="2026-02-20 12:14:32 +0000 UTC" firstStartedPulling="2026-02-20 12:14:35.401434805 +0000 UTC m=+580.120673066" lastFinishedPulling="2026-02-20 12:14:44.393490763 +0000 UTC m=+589.112729004" observedRunningTime="2026-02-20 12:14:45.356829179 +0000 UTC m=+590.076067420" watchObservedRunningTime="2026-02-20 12:14:45.361613844 +0000 UTC m=+590.080852085" Feb 20 12:14:50.255449 master-0 kubenswrapper[31420]: I0220 12:14:50.254867 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pmdps" event={"ID":"70386252-80d5-4afd-a6a4-ea5e26258dd5","Type":"ContainerStarted","Data":"ee8a81e644d0a1d803c5ce7652b0744f2b530a86fb43bdee2e8fd7f9050cd8ec"} Feb 20 12:14:50.256159 master-0 kubenswrapper[31420]: I0220 12:14:50.255794 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-pmdps" Feb 20 12:14:50.260559 master-0 kubenswrapper[31420]: I0220 12:14:50.258204 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4zsq8" event={"ID":"e1db6b7e-63ed-4821-85b8-609d0c656f86","Type":"ContainerStarted","Data":"4e7033c0e64d08ed249e617513fe5d80d8fa5eab7d72eab866194a197ce5a6b3"} Feb 20 12:14:50.260559 master-0 kubenswrapper[31420]: I0220 12:14:50.258448 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-4zsq8" Feb 20 12:14:50.264548 master-0 kubenswrapper[31420]: I0220 12:14:50.262029 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs" event={"ID":"5c5c9c30-4380-4a52-a4fd-fe878839ea4b","Type":"ContainerStarted","Data":"010a31ef5d05b1bf069a34c2842fdecaad0d247f5ca58620b8e5be20885790d7"} Feb 20 12:14:50.303484 master-0 kubenswrapper[31420]: I0220 12:14:50.303375 31420 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operators/observability-operator-59bdc8b94-pmdps" podStartSLOduration=2.656433685 podStartE2EDuration="15.303348808s" podCreationTimestamp="2026-02-20 12:14:35 +0000 UTC" firstStartedPulling="2026-02-20 12:14:36.575204306 +0000 UTC m=+581.294442547" lastFinishedPulling="2026-02-20 12:14:49.222119419 +0000 UTC m=+593.941357670" observedRunningTime="2026-02-20 12:14:50.291020482 +0000 UTC m=+595.010258723" watchObservedRunningTime="2026-02-20 12:14:50.303348808 +0000 UTC m=+595.022587059" Feb 20 12:14:50.320225 master-0 kubenswrapper[31420]: I0220 12:14:50.320125 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-4zsq8" podStartSLOduration=2.459302696 podStartE2EDuration="14.320101807s" podCreationTimestamp="2026-02-20 12:14:36 +0000 UTC" firstStartedPulling="2026-02-20 12:14:37.383405967 +0000 UTC m=+582.102644208" lastFinishedPulling="2026-02-20 12:14:49.244205068 +0000 UTC m=+593.963443319" observedRunningTime="2026-02-20 12:14:50.317604367 +0000 UTC m=+595.036842648" watchObservedRunningTime="2026-02-20 12:14:50.320101807 +0000 UTC m=+595.039340048" Feb 20 12:14:50.325465 master-0 kubenswrapper[31420]: I0220 12:14:50.325405 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-pmdps" Feb 20 12:14:50.363331 master-0 kubenswrapper[31420]: I0220 12:14:50.363248 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-p6qcs" podStartSLOduration=3.395813097 podStartE2EDuration="12.363227505s" podCreationTimestamp="2026-02-20 12:14:38 +0000 UTC" firstStartedPulling="2026-02-20 12:14:40.27572124 +0000 UTC m=+584.994959481" lastFinishedPulling="2026-02-20 12:14:49.243135648 +0000 UTC m=+593.962373889" observedRunningTime="2026-02-20 12:14:50.360987452 +0000 UTC m=+595.080225693" 
watchObservedRunningTime="2026-02-20 12:14:50.363227505 +0000 UTC m=+595.082465756" Feb 20 12:14:55.645443 master-0 kubenswrapper[31420]: I0220 12:14:55.645348 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-xhskg"] Feb 20 12:14:55.646802 master-0 kubenswrapper[31420]: I0220 12:14:55.646692 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" Feb 20 12:14:55.649479 master-0 kubenswrapper[31420]: I0220 12:14:55.649441 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 20 12:14:55.649619 master-0 kubenswrapper[31420]: I0220 12:14:55.649564 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 20 12:14:55.753140 master-0 kubenswrapper[31420]: I0220 12:14:55.753024 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1be75269-37d2-4d2b-9114-1454bb8447f4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-xhskg\" (UID: \"1be75269-37d2-4d2b-9114-1454bb8447f4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" Feb 20 12:14:55.753419 master-0 kubenswrapper[31420]: I0220 12:14:55.753265 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdg7\" (UniqueName: \"kubernetes.io/projected/1be75269-37d2-4d2b-9114-1454bb8447f4-kube-api-access-krdg7\") pod \"cert-manager-cainjector-5545bd876-xhskg\" (UID: \"1be75269-37d2-4d2b-9114-1454bb8447f4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" Feb 20 12:14:55.820460 master-0 kubenswrapper[31420]: I0220 12:14:55.820373 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-xhskg"] Feb 20 12:14:55.855274 master-0 
kubenswrapper[31420]: I0220 12:14:55.855178 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdg7\" (UniqueName: \"kubernetes.io/projected/1be75269-37d2-4d2b-9114-1454bb8447f4-kube-api-access-krdg7\") pod \"cert-manager-cainjector-5545bd876-xhskg\" (UID: \"1be75269-37d2-4d2b-9114-1454bb8447f4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" Feb 20 12:14:55.855516 master-0 kubenswrapper[31420]: I0220 12:14:55.855307 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1be75269-37d2-4d2b-9114-1454bb8447f4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-xhskg\" (UID: \"1be75269-37d2-4d2b-9114-1454bb8447f4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" Feb 20 12:14:55.995177 master-0 kubenswrapper[31420]: I0220 12:14:55.994952 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdg7\" (UniqueName: \"kubernetes.io/projected/1be75269-37d2-4d2b-9114-1454bb8447f4-kube-api-access-krdg7\") pod \"cert-manager-cainjector-5545bd876-xhskg\" (UID: \"1be75269-37d2-4d2b-9114-1454bb8447f4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" Feb 20 12:14:56.012817 master-0 kubenswrapper[31420]: I0220 12:14:56.012673 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1be75269-37d2-4d2b-9114-1454bb8447f4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-xhskg\" (UID: \"1be75269-37d2-4d2b-9114-1454bb8447f4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" Feb 20 12:14:56.264234 master-0 kubenswrapper[31420]: I0220 12:14:56.264117 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" Feb 20 12:14:56.398692 master-0 kubenswrapper[31420]: I0220 12:14:56.398297 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-nlg42"] Feb 20 12:14:56.399904 master-0 kubenswrapper[31420]: I0220 12:14:56.399855 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" Feb 20 12:14:56.433071 master-0 kubenswrapper[31420]: I0220 12:14:56.432921 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-nlg42"] Feb 20 12:14:56.479210 master-0 kubenswrapper[31420]: I0220 12:14:56.479140 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d92d6cf-2648-4bf8-8495-83f9428178d0-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-nlg42\" (UID: \"0d92d6cf-2648-4bf8-8495-83f9428178d0\") " pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" Feb 20 12:14:56.479405 master-0 kubenswrapper[31420]: I0220 12:14:56.479223 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz2fb\" (UniqueName: \"kubernetes.io/projected/0d92d6cf-2648-4bf8-8495-83f9428178d0-kube-api-access-xz2fb\") pod \"cert-manager-webhook-6888856db4-nlg42\" (UID: \"0d92d6cf-2648-4bf8-8495-83f9428178d0\") " pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" Feb 20 12:14:56.582202 master-0 kubenswrapper[31420]: I0220 12:14:56.582045 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d92d6cf-2648-4bf8-8495-83f9428178d0-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-nlg42\" (UID: \"0d92d6cf-2648-4bf8-8495-83f9428178d0\") " pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" Feb 
20 12:14:56.582399 master-0 kubenswrapper[31420]: I0220 12:14:56.582353 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz2fb\" (UniqueName: \"kubernetes.io/projected/0d92d6cf-2648-4bf8-8495-83f9428178d0-kube-api-access-xz2fb\") pod \"cert-manager-webhook-6888856db4-nlg42\" (UID: \"0d92d6cf-2648-4bf8-8495-83f9428178d0\") " pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" Feb 20 12:14:56.888703 master-0 kubenswrapper[31420]: I0220 12:14:56.888445 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-4zsq8" Feb 20 12:14:56.921114 master-0 kubenswrapper[31420]: I0220 12:14:56.919318 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz2fb\" (UniqueName: \"kubernetes.io/projected/0d92d6cf-2648-4bf8-8495-83f9428178d0-kube-api-access-xz2fb\") pod \"cert-manager-webhook-6888856db4-nlg42\" (UID: \"0d92d6cf-2648-4bf8-8495-83f9428178d0\") " pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" Feb 20 12:14:56.924798 master-0 kubenswrapper[31420]: I0220 12:14:56.922921 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d92d6cf-2648-4bf8-8495-83f9428178d0-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-nlg42\" (UID: \"0d92d6cf-2648-4bf8-8495-83f9428178d0\") " pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" Feb 20 12:14:56.955083 master-0 kubenswrapper[31420]: I0220 12:14:56.951324 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-xhskg"] Feb 20 12:14:57.052971 master-0 kubenswrapper[31420]: I0220 12:14:57.052910 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" Feb 20 12:14:57.335655 master-0 kubenswrapper[31420]: I0220 12:14:57.335592 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" event={"ID":"1be75269-37d2-4d2b-9114-1454bb8447f4","Type":"ContainerStarted","Data":"e433ef91bde18ff966b80839d914ee76f86157daa0e2efcd03554c0a2aa4b120"} Feb 20 12:14:57.536118 master-0 kubenswrapper[31420]: W0220 12:14:57.536027 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d92d6cf_2648_4bf8_8495_83f9428178d0.slice/crio-154443200a38efcecfa18113784efe0f634e0c75dbddc9155674aec11322a988 WatchSource:0}: Error finding container 154443200a38efcecfa18113784efe0f634e0c75dbddc9155674aec11322a988: Status 404 returned error can't find the container with id 154443200a38efcecfa18113784efe0f634e0c75dbddc9155674aec11322a988 Feb 20 12:14:57.542782 master-0 kubenswrapper[31420]: I0220 12:14:57.542711 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-nlg42"] Feb 20 12:14:58.345340 master-0 kubenswrapper[31420]: I0220 12:14:58.345278 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" event={"ID":"0d92d6cf-2648-4bf8-8495-83f9428178d0","Type":"ContainerStarted","Data":"154443200a38efcecfa18113784efe0f634e0c75dbddc9155674aec11322a988"} Feb 20 12:14:58.646379 master-0 kubenswrapper[31420]: I0220 12:14:58.646250 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7865667bdc-lwg78" Feb 20 12:15:00.189513 master-0 kubenswrapper[31420]: I0220 12:15:00.186037 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh"] Feb 20 12:15:00.189513 master-0 kubenswrapper[31420]: 
I0220 12:15:00.187074 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:00.199563 master-0 kubenswrapper[31420]: I0220 12:15:00.198932 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kmfb6" Feb 20 12:15:00.218575 master-0 kubenswrapper[31420]: I0220 12:15:00.212318 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 20 12:15:00.222969 master-0 kubenswrapper[31420]: I0220 12:15:00.221680 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh"] Feb 20 12:15:00.366804 master-0 kubenswrapper[31420]: I0220 12:15:00.366739 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e936990-53f8-428f-8691-29af6e548f5e-secret-volume\") pod \"collect-profiles-29526495-qwpmh\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:00.367008 master-0 kubenswrapper[31420]: I0220 12:15:00.366844 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e936990-53f8-428f-8691-29af6e548f5e-config-volume\") pod \"collect-profiles-29526495-qwpmh\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:00.367008 master-0 kubenswrapper[31420]: I0220 12:15:00.366934 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4sbr\" (UniqueName: 
\"kubernetes.io/projected/7e936990-53f8-428f-8691-29af6e548f5e-kube-api-access-l4sbr\") pod \"collect-profiles-29526495-qwpmh\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:00.472213 master-0 kubenswrapper[31420]: I0220 12:15:00.472098 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4sbr\" (UniqueName: \"kubernetes.io/projected/7e936990-53f8-428f-8691-29af6e548f5e-kube-api-access-l4sbr\") pod \"collect-profiles-29526495-qwpmh\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:00.472213 master-0 kubenswrapper[31420]: I0220 12:15:00.472182 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e936990-53f8-428f-8691-29af6e548f5e-secret-volume\") pod \"collect-profiles-29526495-qwpmh\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:00.472421 master-0 kubenswrapper[31420]: I0220 12:15:00.472269 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e936990-53f8-428f-8691-29af6e548f5e-config-volume\") pod \"collect-profiles-29526495-qwpmh\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:00.473690 master-0 kubenswrapper[31420]: I0220 12:15:00.473665 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e936990-53f8-428f-8691-29af6e548f5e-config-volume\") pod \"collect-profiles-29526495-qwpmh\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 
20 12:15:00.478401 master-0 kubenswrapper[31420]: I0220 12:15:00.478339 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e936990-53f8-428f-8691-29af6e548f5e-secret-volume\") pod \"collect-profiles-29526495-qwpmh\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:00.505371 master-0 kubenswrapper[31420]: I0220 12:15:00.505326 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4sbr\" (UniqueName: \"kubernetes.io/projected/7e936990-53f8-428f-8691-29af6e548f5e-kube-api-access-l4sbr\") pod \"collect-profiles-29526495-qwpmh\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:00.533080 master-0 kubenswrapper[31420]: I0220 12:15:00.533012 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:01.621017 master-0 kubenswrapper[31420]: I0220 12:15:01.617916 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh"] Feb 20 12:15:03.443563 master-0 kubenswrapper[31420]: I0220 12:15:03.439803 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" event={"ID":"7e936990-53f8-428f-8691-29af6e548f5e","Type":"ContainerStarted","Data":"5ede7ca1a33239980d68f7c12baf2ba65ddca32848a23e4d4e32b5f1570f3d4d"} Feb 20 12:15:03.443563 master-0 kubenswrapper[31420]: I0220 12:15:03.439881 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" 
event={"ID":"7e936990-53f8-428f-8691-29af6e548f5e","Type":"ContainerStarted","Data":"be1fc58d3ef926d01febfdfc37c91f39535dff8f0c8785f72f659772825207fb"} Feb 20 12:15:03.487572 master-0 kubenswrapper[31420]: I0220 12:15:03.484719 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" podStartSLOduration=3.484686852 podStartE2EDuration="3.484686852s" podCreationTimestamp="2026-02-20 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:15:03.474884998 +0000 UTC m=+608.194123249" watchObservedRunningTime="2026-02-20 12:15:03.484686852 +0000 UTC m=+608.203925093" Feb 20 12:15:04.450512 master-0 kubenswrapper[31420]: I0220 12:15:04.450449 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" event={"ID":"0d92d6cf-2648-4bf8-8495-83f9428178d0","Type":"ContainerStarted","Data":"7f6f8e62aa1bf4802f7117a04d478db5c216fd247ad73b60115c92ddf8a4bfec"} Feb 20 12:15:04.451173 master-0 kubenswrapper[31420]: I0220 12:15:04.450551 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" Feb 20 12:15:04.452922 master-0 kubenswrapper[31420]: I0220 12:15:04.452868 31420 generic.go:334] "Generic (PLEG): container finished" podID="7e936990-53f8-428f-8691-29af6e548f5e" containerID="5ede7ca1a33239980d68f7c12baf2ba65ddca32848a23e4d4e32b5f1570f3d4d" exitCode=0 Feb 20 12:15:04.453019 master-0 kubenswrapper[31420]: I0220 12:15:04.452939 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" event={"ID":"7e936990-53f8-428f-8691-29af6e548f5e","Type":"ContainerDied","Data":"5ede7ca1a33239980d68f7c12baf2ba65ddca32848a23e4d4e32b5f1570f3d4d"} Feb 20 12:15:04.454598 master-0 kubenswrapper[31420]: I0220 
12:15:04.454565 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" event={"ID":"1be75269-37d2-4d2b-9114-1454bb8447f4","Type":"ContainerStarted","Data":"5a109135d361530eceb23c1ed29260dbbda55bd92a2eec364c31bd89b136c033"} Feb 20 12:15:04.474094 master-0 kubenswrapper[31420]: I0220 12:15:04.473994 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-nlg42" podStartSLOduration=2.7624667069999997 podStartE2EDuration="8.473973816s" podCreationTimestamp="2026-02-20 12:14:56 +0000 UTC" firstStartedPulling="2026-02-20 12:14:57.540260158 +0000 UTC m=+602.259498399" lastFinishedPulling="2026-02-20 12:15:03.251767267 +0000 UTC m=+607.971005508" observedRunningTime="2026-02-20 12:15:04.467941037 +0000 UTC m=+609.187179278" watchObservedRunningTime="2026-02-20 12:15:04.473973816 +0000 UTC m=+609.193212057" Feb 20 12:15:04.496803 master-0 kubenswrapper[31420]: I0220 12:15:04.496714 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-xhskg" podStartSLOduration=3.238068137 podStartE2EDuration="9.496693792s" podCreationTimestamp="2026-02-20 12:14:55 +0000 UTC" firstStartedPulling="2026-02-20 12:14:56.964061048 +0000 UTC m=+601.683299329" lastFinishedPulling="2026-02-20 12:15:03.222686743 +0000 UTC m=+607.941924984" observedRunningTime="2026-02-20 12:15:04.489254844 +0000 UTC m=+609.208493105" watchObservedRunningTime="2026-02-20 12:15:04.496693792 +0000 UTC m=+609.215932033" Feb 20 12:15:05.896517 master-0 kubenswrapper[31420]: I0220 12:15:05.896454 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" Feb 20 12:15:06.004157 master-0 kubenswrapper[31420]: I0220 12:15:06.004081 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e936990-53f8-428f-8691-29af6e548f5e-secret-volume\") pod \"7e936990-53f8-428f-8691-29af6e548f5e\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " Feb 20 12:15:06.004378 master-0 kubenswrapper[31420]: I0220 12:15:06.004202 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e936990-53f8-428f-8691-29af6e548f5e-config-volume\") pod \"7e936990-53f8-428f-8691-29af6e548f5e\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " Feb 20 12:15:06.004378 master-0 kubenswrapper[31420]: I0220 12:15:06.004307 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4sbr\" (UniqueName: \"kubernetes.io/projected/7e936990-53f8-428f-8691-29af6e548f5e-kube-api-access-l4sbr\") pod \"7e936990-53f8-428f-8691-29af6e548f5e\" (UID: \"7e936990-53f8-428f-8691-29af6e548f5e\") " Feb 20 12:15:06.006021 master-0 kubenswrapper[31420]: I0220 12:15:06.005985 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e936990-53f8-428f-8691-29af6e548f5e-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e936990-53f8-428f-8691-29af6e548f5e" (UID: "7e936990-53f8-428f-8691-29af6e548f5e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:15:06.007775 master-0 kubenswrapper[31420]: I0220 12:15:06.007735 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e936990-53f8-428f-8691-29af6e548f5e-kube-api-access-l4sbr" (OuterVolumeSpecName: "kube-api-access-l4sbr") pod "7e936990-53f8-428f-8691-29af6e548f5e" (UID: "7e936990-53f8-428f-8691-29af6e548f5e"). InnerVolumeSpecName "kube-api-access-l4sbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:15:06.009886 master-0 kubenswrapper[31420]: I0220 12:15:06.009820 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e936990-53f8-428f-8691-29af6e548f5e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e936990-53f8-428f-8691-29af6e548f5e" (UID: "7e936990-53f8-428f-8691-29af6e548f5e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:15:06.106272 master-0 kubenswrapper[31420]: I0220 12:15:06.106136 31420 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e936990-53f8-428f-8691-29af6e548f5e-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 20 12:15:06.106272 master-0 kubenswrapper[31420]: I0220 12:15:06.106192 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4sbr\" (UniqueName: \"kubernetes.io/projected/7e936990-53f8-428f-8691-29af6e548f5e-kube-api-access-l4sbr\") on node \"master-0\" DevicePath \"\""
Feb 20 12:15:06.106272 master-0 kubenswrapper[31420]: I0220 12:15:06.106207 31420 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e936990-53f8-428f-8691-29af6e548f5e-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 20 12:15:06.470459 master-0 kubenswrapper[31420]: I0220 12:15:06.470318 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh" event={"ID":"7e936990-53f8-428f-8691-29af6e548f5e","Type":"ContainerDied","Data":"be1fc58d3ef926d01febfdfc37c91f39535dff8f0c8785f72f659772825207fb"}
Feb 20 12:15:06.470459 master-0 kubenswrapper[31420]: I0220 12:15:06.470373 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1fc58d3ef926d01febfdfc37c91f39535dff8f0c8785f72f659772825207fb"
Feb 20 12:15:06.470459 master-0 kubenswrapper[31420]: I0220 12:15:06.470401 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526495-qwpmh"
Feb 20 12:15:12.058215 master-0 kubenswrapper[31420]: I0220 12:15:12.058140 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-nlg42"
Feb 20 12:15:13.469766 master-0 kubenswrapper[31420]: I0220 12:15:13.469677 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-hr8qf"]
Feb 20 12:15:13.470385 master-0 kubenswrapper[31420]: E0220 12:15:13.470264 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e936990-53f8-428f-8691-29af6e548f5e" containerName="collect-profiles"
Feb 20 12:15:13.470385 master-0 kubenswrapper[31420]: I0220 12:15:13.470295 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e936990-53f8-428f-8691-29af6e548f5e" containerName="collect-profiles"
Feb 20 12:15:13.471331 master-0 kubenswrapper[31420]: I0220 12:15:13.470652 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e936990-53f8-428f-8691-29af6e548f5e" containerName="collect-profiles"
Feb 20 12:15:13.471620 master-0 kubenswrapper[31420]: I0220 12:15:13.471582 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-hr8qf"
Feb 20 12:15:13.477799 master-0 kubenswrapper[31420]: I0220 12:15:13.477738 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-hr8qf"]
Feb 20 12:15:13.555185 master-0 kubenswrapper[31420]: I0220 12:15:13.555068 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272-bound-sa-token\") pod \"cert-manager-545d4d4674-hr8qf\" (UID: \"44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272\") " pod="cert-manager/cert-manager-545d4d4674-hr8qf"
Feb 20 12:15:13.555493 master-0 kubenswrapper[31420]: I0220 12:15:13.555327 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55vx7\" (UniqueName: \"kubernetes.io/projected/44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272-kube-api-access-55vx7\") pod \"cert-manager-545d4d4674-hr8qf\" (UID: \"44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272\") " pod="cert-manager/cert-manager-545d4d4674-hr8qf"
Feb 20 12:15:13.657653 master-0 kubenswrapper[31420]: I0220 12:15:13.656903 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55vx7\" (UniqueName: \"kubernetes.io/projected/44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272-kube-api-access-55vx7\") pod \"cert-manager-545d4d4674-hr8qf\" (UID: \"44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272\") " pod="cert-manager/cert-manager-545d4d4674-hr8qf"
Feb 20 12:15:13.658191 master-0 kubenswrapper[31420]: I0220 12:15:13.658123 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272-bound-sa-token\") pod \"cert-manager-545d4d4674-hr8qf\" (UID: \"44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272\") " pod="cert-manager/cert-manager-545d4d4674-hr8qf"
Feb 20 12:15:13.673872 master-0 kubenswrapper[31420]: I0220 12:15:13.673806 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55vx7\" (UniqueName: \"kubernetes.io/projected/44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272-kube-api-access-55vx7\") pod \"cert-manager-545d4d4674-hr8qf\" (UID: \"44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272\") " pod="cert-manager/cert-manager-545d4d4674-hr8qf"
Feb 20 12:15:13.676734 master-0 kubenswrapper[31420]: I0220 12:15:13.676677 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272-bound-sa-token\") pod \"cert-manager-545d4d4674-hr8qf\" (UID: \"44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272\") " pod="cert-manager/cert-manager-545d4d4674-hr8qf"
Feb 20 12:15:13.805571 master-0 kubenswrapper[31420]: I0220 12:15:13.805384 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-hr8qf"
Feb 20 12:15:14.283599 master-0 kubenswrapper[31420]: I0220 12:15:14.283498 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-hr8qf"]
Feb 20 12:15:14.285342 master-0 kubenswrapper[31420]: W0220 12:15:14.285291 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44dfd77d_d9ca_4ed3_a08d_6e0bb66d1272.slice/crio-31479b455b76b35dcfa053da8c53f39b61ea04200022e469f5154889be44014a WatchSource:0}: Error finding container 31479b455b76b35dcfa053da8c53f39b61ea04200022e469f5154889be44014a: Status 404 returned error can't find the container with id 31479b455b76b35dcfa053da8c53f39b61ea04200022e469f5154889be44014a
Feb 20 12:15:14.563879 master-0 kubenswrapper[31420]: I0220 12:15:14.563690 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-hr8qf" event={"ID":"44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272","Type":"ContainerStarted","Data":"a593ac73a6eff8c95b294d98e0e636710902497379ca03b9082d3ad05d5a5062"}
Feb 20 12:15:14.563879 master-0 kubenswrapper[31420]: I0220 12:15:14.563791 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-hr8qf" event={"ID":"44dfd77d-d9ca-4ed3-a08d-6e0bb66d1272","Type":"ContainerStarted","Data":"31479b455b76b35dcfa053da8c53f39b61ea04200022e469f5154889be44014a"}
Feb 20 12:15:14.612022 master-0 kubenswrapper[31420]: I0220 12:15:14.611886 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-hr8qf" podStartSLOduration=1.6118541020000001 podStartE2EDuration="1.611854102s" podCreationTimestamp="2026-02-20 12:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:15:14.593877548 +0000 UTC m=+619.313115829" watchObservedRunningTime="2026-02-20 12:15:14.611854102 +0000 UTC m=+619.331092363"
Feb 20 12:15:16.530549 master-0 kubenswrapper[31420]: I0220 12:15:16.527780 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"]
Feb 20 12:15:16.530549 master-0 kubenswrapper[31420]: I0220 12:15:16.529634 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:16.537856 master-0 kubenswrapper[31420]: I0220 12:15:16.537783 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cfkwh"]
Feb 20 12:15:16.538912 master-0 kubenswrapper[31420]: I0220 12:15:16.538237 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 20 12:15:16.561696 master-0 kubenswrapper[31420]: I0220 12:15:16.561633 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"]
Feb 20 12:15:16.561813 master-0 kubenswrapper[31420]: I0220 12:15:16.561790 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.567022 master-0 kubenswrapper[31420]: I0220 12:15:16.566967 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 20 12:15:16.567410 master-0 kubenswrapper[31420]: I0220 12:15:16.567379 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 20 12:15:16.610118 master-0 kubenswrapper[31420]: I0220 12:15:16.610057 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91455b18-03a0-49c7-aa61-59b91e88a5fe-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-d7llz\" (UID: \"91455b18-03a0-49c7-aa61-59b91e88a5fe\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:16.610375 master-0 kubenswrapper[31420]: I0220 12:15:16.610159 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgl2z\" (UniqueName: \"kubernetes.io/projected/91455b18-03a0-49c7-aa61-59b91e88a5fe-kube-api-access-dgl2z\") pod \"frr-k8s-webhook-server-78b44bf5bb-d7llz\" (UID: \"91455b18-03a0-49c7-aa61-59b91e88a5fe\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:16.612831 master-0 kubenswrapper[31420]: I0220 12:15:16.612333 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-r94p4"]
Feb 20 12:15:16.614869 master-0 kubenswrapper[31420]: I0220 12:15:16.614037 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.616118 master-0 kubenswrapper[31420]: I0220 12:15:16.616000 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 20 12:15:16.618435 master-0 kubenswrapper[31420]: I0220 12:15:16.618386 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 20 12:15:16.618435 master-0 kubenswrapper[31420]: I0220 12:15:16.618409 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 20 12:15:16.629863 master-0 kubenswrapper[31420]: I0220 12:15:16.629799 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-vdrkc"]
Feb 20 12:15:16.634304 master-0 kubenswrapper[31420]: I0220 12:15:16.632234 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:16.634304 master-0 kubenswrapper[31420]: I0220 12:15:16.633953 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 20 12:15:16.660955 master-0 kubenswrapper[31420]: I0220 12:15:16.660907 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-vdrkc"]
Feb 20 12:15:16.712341 master-0 kubenswrapper[31420]: I0220 12:15:16.712283 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ebec7408-42ea-4bdd-9cc9-a42caaefe664-frr-startup\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.712341 master-0 kubenswrapper[31420]: I0220 12:15:16.712335 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712375 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91455b18-03a0-49c7-aa61-59b91e88a5fe-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-d7llz\" (UID: \"91455b18-03a0-49c7-aa61-59b91e88a5fe\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712399 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkljt\" (UniqueName: \"kubernetes.io/projected/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-kube-api-access-lkljt\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712425 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-metrics-certs\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712446 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-metrics-certs\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712467 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-frr-conf\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712483 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-frr-sockets\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712497 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b91f2548-98e3-418c-9a05-58502d67d66f-metallb-excludel2\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712558 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebec7408-42ea-4bdd-9cc9-a42caaefe664-metrics-certs\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712585 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-metrics\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712611 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnwx\" (UniqueName: \"kubernetes.io/projected/ebec7408-42ea-4bdd-9cc9-a42caaefe664-kube-api-access-dqnwx\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.712624 master-0 kubenswrapper[31420]: I0220 12:15:16.712630 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgl2z\" (UniqueName: \"kubernetes.io/projected/91455b18-03a0-49c7-aa61-59b91e88a5fe-kube-api-access-dgl2z\") pod \"frr-k8s-webhook-server-78b44bf5bb-d7llz\" (UID: \"91455b18-03a0-49c7-aa61-59b91e88a5fe\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:16.712984 master-0 kubenswrapper[31420]: I0220 12:15:16.712649 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-reloader\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.712984 master-0 kubenswrapper[31420]: I0220 12:15:16.712683 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptm98\" (UniqueName: \"kubernetes.io/projected/b91f2548-98e3-418c-9a05-58502d67d66f-kube-api-access-ptm98\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.712984 master-0 kubenswrapper[31420]: I0220 12:15:16.712698 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-cert\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:16.712984 master-0 kubenswrapper[31420]: E0220 12:15:16.712827 31420 secret.go:189] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Feb 20 12:15:16.712984 master-0 kubenswrapper[31420]: E0220 12:15:16.712869 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91455b18-03a0-49c7-aa61-59b91e88a5fe-cert podName:91455b18-03a0-49c7-aa61-59b91e88a5fe nodeName:}" failed. No retries permitted until 2026-02-20 12:15:17.212852568 +0000 UTC m=+621.932090809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91455b18-03a0-49c7-aa61-59b91e88a5fe-cert") pod "frr-k8s-webhook-server-78b44bf5bb-d7llz" (UID: "91455b18-03a0-49c7-aa61-59b91e88a5fe") : secret "frr-k8s-webhook-server-cert" not found
Feb 20 12:15:16.730861 master-0 kubenswrapper[31420]: I0220 12:15:16.730822 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgl2z\" (UniqueName: \"kubernetes.io/projected/91455b18-03a0-49c7-aa61-59b91e88a5fe-kube-api-access-dgl2z\") pod \"frr-k8s-webhook-server-78b44bf5bb-d7llz\" (UID: \"91455b18-03a0-49c7-aa61-59b91e88a5fe\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:16.814086 master-0 kubenswrapper[31420]: I0220 12:15:16.813951 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqnwx\" (UniqueName: \"kubernetes.io/projected/ebec7408-42ea-4bdd-9cc9-a42caaefe664-kube-api-access-dqnwx\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.814086 master-0 kubenswrapper[31420]: I0220 12:15:16.814028 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-reloader\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.814086 master-0 kubenswrapper[31420]: I0220 12:15:16.814077 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptm98\" (UniqueName: \"kubernetes.io/projected/b91f2548-98e3-418c-9a05-58502d67d66f-kube-api-access-ptm98\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.814326 master-0 kubenswrapper[31420]: I0220 12:15:16.814101 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-cert\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:16.814326 master-0 kubenswrapper[31420]: I0220 12:15:16.814144 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ebec7408-42ea-4bdd-9cc9-a42caaefe664-frr-startup\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.814326 master-0 kubenswrapper[31420]: I0220 12:15:16.814166 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.814326 master-0 kubenswrapper[31420]: I0220 12:15:16.814230 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkljt\" (UniqueName: \"kubernetes.io/projected/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-kube-api-access-lkljt\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:16.814326 master-0 kubenswrapper[31420]: I0220 12:15:16.814262 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-metrics-certs\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.814326 master-0 kubenswrapper[31420]: I0220 12:15:16.814293 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-metrics-certs\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:16.814326 master-0 kubenswrapper[31420]: I0220 12:15:16.814319 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-frr-conf\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.814573 master-0 kubenswrapper[31420]: I0220 12:15:16.814354 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-frr-sockets\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.814573 master-0 kubenswrapper[31420]: I0220 12:15:16.814379 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b91f2548-98e3-418c-9a05-58502d67d66f-metallb-excludel2\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.814573 master-0 kubenswrapper[31420]: I0220 12:15:16.814415 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebec7408-42ea-4bdd-9cc9-a42caaefe664-metrics-certs\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.814573 master-0 kubenswrapper[31420]: I0220 12:15:16.814444 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-metrics\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.814999 master-0 kubenswrapper[31420]: I0220 12:15:16.814965 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-metrics\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.815569 master-0 kubenswrapper[31420]: I0220 12:15:16.815518 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-reloader\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.816735 master-0 kubenswrapper[31420]: I0220 12:15:16.816699 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ebec7408-42ea-4bdd-9cc9-a42caaefe664-frr-startup\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.816814 master-0 kubenswrapper[31420]: E0220 12:15:16.816785 31420 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 20 12:15:16.816870 master-0 kubenswrapper[31420]: E0220 12:15:16.816854 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist podName:b91f2548-98e3-418c-9a05-58502d67d66f nodeName:}" failed. No retries permitted until 2026-02-20 12:15:17.316838141 +0000 UTC m=+622.036076382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist") pod "speaker-r94p4" (UID: "b91f2548-98e3-418c-9a05-58502d67d66f") : secret "metallb-memberlist" not found
Feb 20 12:15:16.818394 master-0 kubenswrapper[31420]: I0220 12:15:16.818357 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b91f2548-98e3-418c-9a05-58502d67d66f-metallb-excludel2\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.818695 master-0 kubenswrapper[31420]: E0220 12:15:16.818638 31420 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Feb 20 12:15:16.818791 master-0 kubenswrapper[31420]: E0220 12:15:16.818762 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-metrics-certs podName:d8a5df14-16b6-4d50-900b-8f0c241b1d1b nodeName:}" failed. No retries permitted until 2026-02-20 12:15:17.318734314 +0000 UTC m=+622.037972655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-metrics-certs") pod "controller-69bbfbf88f-vdrkc" (UID: "d8a5df14-16b6-4d50-900b-8f0c241b1d1b") : secret "controller-certs-secret" not found
Feb 20 12:15:16.818840 master-0 kubenswrapper[31420]: I0220 12:15:16.818815 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-frr-conf\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.819117 master-0 kubenswrapper[31420]: I0220 12:15:16.819085 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ebec7408-42ea-4bdd-9cc9-a42caaefe664-frr-sockets\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.819747 master-0 kubenswrapper[31420]: I0220 12:15:16.819712 31420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 20 12:15:16.822795 master-0 kubenswrapper[31420]: I0220 12:15:16.822755 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-metrics-certs\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.829015 master-0 kubenswrapper[31420]: I0220 12:15:16.828963 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebec7408-42ea-4bdd-9cc9-a42caaefe664-metrics-certs\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.829408 master-0 kubenswrapper[31420]: I0220 12:15:16.829349 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-cert\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:16.835991 master-0 kubenswrapper[31420]: I0220 12:15:16.835953 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptm98\" (UniqueName: \"kubernetes.io/projected/b91f2548-98e3-418c-9a05-58502d67d66f-kube-api-access-ptm98\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:16.836661 master-0 kubenswrapper[31420]: I0220 12:15:16.836600 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqnwx\" (UniqueName: \"kubernetes.io/projected/ebec7408-42ea-4bdd-9cc9-a42caaefe664-kube-api-access-dqnwx\") pod \"frr-k8s-cfkwh\" (UID: \"ebec7408-42ea-4bdd-9cc9-a42caaefe664\") " pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:16.842323 master-0 kubenswrapper[31420]: I0220 12:15:16.842282 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkljt\" (UniqueName: \"kubernetes.io/projected/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-kube-api-access-lkljt\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:16.906241 master-0 kubenswrapper[31420]: I0220 12:15:16.906160 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:17.223070 master-0 kubenswrapper[31420]: I0220 12:15:17.222979 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91455b18-03a0-49c7-aa61-59b91e88a5fe-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-d7llz\" (UID: \"91455b18-03a0-49c7-aa61-59b91e88a5fe\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:17.229476 master-0 kubenswrapper[31420]: I0220 12:15:17.229411 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91455b18-03a0-49c7-aa61-59b91e88a5fe-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-d7llz\" (UID: \"91455b18-03a0-49c7-aa61-59b91e88a5fe\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:17.324851 master-0 kubenswrapper[31420]: I0220 12:15:17.324703 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:17.325231 master-0 kubenswrapper[31420]: E0220 12:15:17.324964 31420 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 20 12:15:17.325327 master-0 kubenswrapper[31420]: I0220 12:15:17.325178 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-metrics-certs\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:17.325491 master-0 kubenswrapper[31420]: E0220 12:15:17.325305 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist podName:b91f2548-98e3-418c-9a05-58502d67d66f nodeName:}" failed. No retries permitted until 2026-02-20 12:15:18.325274494 +0000 UTC m=+623.044512775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist") pod "speaker-r94p4" (UID: "b91f2548-98e3-418c-9a05-58502d67d66f") : secret "metallb-memberlist" not found
Feb 20 12:15:17.329858 master-0 kubenswrapper[31420]: I0220 12:15:17.329822 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8a5df14-16b6-4d50-900b-8f0c241b1d1b-metrics-certs\") pod \"controller-69bbfbf88f-vdrkc\" (UID: \"d8a5df14-16b6-4d50-900b-8f0c241b1d1b\") " pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:17.469002 master-0 kubenswrapper[31420]: I0220 12:15:17.468914 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:17.576227 master-0 kubenswrapper[31420]: I0220 12:15:17.576065 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:17.606602 master-0 kubenswrapper[31420]: I0220 12:15:17.606484 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerStarted","Data":"59cda70230c69e66e76906f36d00a2d3f0f74751877bb3d8c40206accd7abc68"}
Feb 20 12:15:17.972151 master-0 kubenswrapper[31420]: I0220 12:15:17.972041 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"]
Feb 20 12:15:17.975423 master-0 kubenswrapper[31420]: W0220 12:15:17.974874 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91455b18_03a0_49c7_aa61_59b91e88a5fe.slice/crio-b2448a0291e1c0833b10849e9ac580790132ec69cdac1527d26e0718f47a572b WatchSource:0}: Error finding container b2448a0291e1c0833b10849e9ac580790132ec69cdac1527d26e0718f47a572b: Status 404 returned error can't find the container with id b2448a0291e1c0833b10849e9ac580790132ec69cdac1527d26e0718f47a572b
Feb 20 12:15:18.102206 master-0 kubenswrapper[31420]: I0220 12:15:18.100099 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-vdrkc"]
Feb 20 12:15:18.350058 master-0 kubenswrapper[31420]: I0220 12:15:18.349975 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:18.350317 master-0 kubenswrapper[31420]: E0220 12:15:18.350226 31420 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 20 12:15:18.350371 master-0 kubenswrapper[31420]: E0220 12:15:18.350344 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist podName:b91f2548-98e3-418c-9a05-58502d67d66f nodeName:}" failed. No retries permitted until 2026-02-20 12:15:20.350313959 +0000 UTC m=+625.069552210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist") pod "speaker-r94p4" (UID: "b91f2548-98e3-418c-9a05-58502d67d66f") : secret "metallb-memberlist" not found
Feb 20 12:15:18.616841 master-0 kubenswrapper[31420]: I0220 12:15:18.616773 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-vdrkc" event={"ID":"d8a5df14-16b6-4d50-900b-8f0c241b1d1b","Type":"ContainerStarted","Data":"9c1932179c8d738556c25127050e0ecc940f859d0705bcf23b149a59e78c467a"}
Feb 20 12:15:18.618508 master-0 kubenswrapper[31420]: I0220 12:15:18.616859 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-vdrkc" event={"ID":"d8a5df14-16b6-4d50-900b-8f0c241b1d1b","Type":"ContainerStarted","Data":"d60a12d8733a4d091b44097e626734c8293aef3a0f32640f78af658a4b23bce5"}
Feb 20 12:15:18.619327 master-0 kubenswrapper[31420]: I0220 12:15:18.619124 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz" event={"ID":"91455b18-03a0-49c7-aa61-59b91e88a5fe","Type":"ContainerStarted","Data":"b2448a0291e1c0833b10849e9ac580790132ec69cdac1527d26e0718f47a572b"}
Feb 20 12:15:18.666644 master-0 kubenswrapper[31420]: I0220 12:15:18.663122 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-r22kg"]
Feb 20 12:15:18.666644 master-0 kubenswrapper[31420]: I0220 12:15:18.665166 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-r22kg"
Feb 20 12:15:18.673633 master-0 kubenswrapper[31420]: I0220 12:15:18.672668 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb"]
Feb 20 12:15:18.673900 master-0 kubenswrapper[31420]: I0220 12:15:18.673847 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb"
Feb 20 12:15:18.676568 master-0 kubenswrapper[31420]: I0220 12:15:18.676338 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 20 12:15:18.722088 master-0 kubenswrapper[31420]: I0220 12:15:18.720655 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-r22kg"]
Feb 20 12:15:18.743175 master-0 kubenswrapper[31420]: I0220 12:15:18.742914 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-k6pl2"]
Feb 20 12:15:18.746636 master-0 kubenswrapper[31420]: I0220 12:15:18.746585 31420 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.756723 master-0 kubenswrapper[31420]: I0220 12:15:18.756663 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dhs\" (UniqueName: \"kubernetes.io/projected/a3c9202b-541d-4ec7-9ef5-d5da935ad5d9-kube-api-access-87dhs\") pod \"nmstate-metrics-58c85c668d-r22kg\" (UID: \"a3c9202b-541d-4ec7-9ef5-d5da935ad5d9\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-r22kg" Feb 20 12:15:18.768728 master-0 kubenswrapper[31420]: I0220 12:15:18.767023 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb"] Feb 20 12:15:18.827342 master-0 kubenswrapper[31420]: I0220 12:15:18.827262 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g"] Feb 20 12:15:18.828752 master-0 kubenswrapper[31420]: I0220 12:15:18.828724 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:18.841941 master-0 kubenswrapper[31420]: I0220 12:15:18.841881 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 20 12:15:18.842272 master-0 kubenswrapper[31420]: I0220 12:15:18.842195 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 20 12:15:18.849378 master-0 kubenswrapper[31420]: I0220 12:15:18.848744 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g"] Feb 20 12:15:18.860774 master-0 kubenswrapper[31420]: I0220 12:15:18.860506 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b686af0c-791e-42be-b608-e1a265d973a0-dbus-socket\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.860774 master-0 kubenswrapper[31420]: I0220 12:15:18.860571 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b686af0c-791e-42be-b608-e1a265d973a0-ovs-socket\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.860774 master-0 kubenswrapper[31420]: I0220 12:15:18.860610 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6cd442ad-ad65-497a-b5eb-bc79c3023466-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qj8cb\" (UID: \"6cd442ad-ad65-497a-b5eb-bc79c3023466\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" Feb 20 12:15:18.860774 master-0 kubenswrapper[31420]: I0220 12:15:18.860652 31420 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b686af0c-791e-42be-b608-e1a265d973a0-nmstate-lock\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.860774 master-0 kubenswrapper[31420]: I0220 12:15:18.860679 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ncgh\" (UniqueName: \"kubernetes.io/projected/6cd442ad-ad65-497a-b5eb-bc79c3023466-kube-api-access-8ncgh\") pod \"nmstate-webhook-866bcb46dc-qj8cb\" (UID: \"6cd442ad-ad65-497a-b5eb-bc79c3023466\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" Feb 20 12:15:18.860774 master-0 kubenswrapper[31420]: I0220 12:15:18.860722 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87dhs\" (UniqueName: \"kubernetes.io/projected/a3c9202b-541d-4ec7-9ef5-d5da935ad5d9-kube-api-access-87dhs\") pod \"nmstate-metrics-58c85c668d-r22kg\" (UID: \"a3c9202b-541d-4ec7-9ef5-d5da935ad5d9\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-r22kg" Feb 20 12:15:18.860774 master-0 kubenswrapper[31420]: I0220 12:15:18.860779 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5c25\" (UniqueName: \"kubernetes.io/projected/b686af0c-791e-42be-b608-e1a265d973a0-kube-api-access-n5c25\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.882940 master-0 kubenswrapper[31420]: I0220 12:15:18.882793 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87dhs\" (UniqueName: \"kubernetes.io/projected/a3c9202b-541d-4ec7-9ef5-d5da935ad5d9-kube-api-access-87dhs\") pod \"nmstate-metrics-58c85c668d-r22kg\" (UID: 
\"a3c9202b-541d-4ec7-9ef5-d5da935ad5d9\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-r22kg" Feb 20 12:15:18.962948 master-0 kubenswrapper[31420]: I0220 12:15:18.962710 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5c25\" (UniqueName: \"kubernetes.io/projected/b686af0c-791e-42be-b608-e1a265d973a0-kube-api-access-n5c25\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.962948 master-0 kubenswrapper[31420]: I0220 12:15:18.962791 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b686af0c-791e-42be-b608-e1a265d973a0-dbus-socket\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.962948 master-0 kubenswrapper[31420]: I0220 12:15:18.962820 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvft\" (UniqueName: \"kubernetes.io/projected/8585d57e-59ce-4616-9c40-80fa1d13357c-kube-api-access-zdvft\") pod \"nmstate-console-plugin-5c78fc5d65-dml4g\" (UID: \"8585d57e-59ce-4616-9c40-80fa1d13357c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:18.962948 master-0 kubenswrapper[31420]: I0220 12:15:18.962843 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b686af0c-791e-42be-b608-e1a265d973a0-ovs-socket\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.962948 master-0 kubenswrapper[31420]: I0220 12:15:18.962878 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/6cd442ad-ad65-497a-b5eb-bc79c3023466-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qj8cb\" (UID: \"6cd442ad-ad65-497a-b5eb-bc79c3023466\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" Feb 20 12:15:18.963316 master-0 kubenswrapper[31420]: I0220 12:15:18.963093 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b686af0c-791e-42be-b608-e1a265d973a0-dbus-socket\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.963316 master-0 kubenswrapper[31420]: I0220 12:15:18.963226 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b686af0c-791e-42be-b608-e1a265d973a0-nmstate-lock\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.963381 master-0 kubenswrapper[31420]: I0220 12:15:18.963320 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8585d57e-59ce-4616-9c40-80fa1d13357c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dml4g\" (UID: \"8585d57e-59ce-4616-9c40-80fa1d13357c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:18.963422 master-0 kubenswrapper[31420]: I0220 12:15:18.963408 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ncgh\" (UniqueName: \"kubernetes.io/projected/6cd442ad-ad65-497a-b5eb-bc79c3023466-kube-api-access-8ncgh\") pod \"nmstate-webhook-866bcb46dc-qj8cb\" (UID: \"6cd442ad-ad65-497a-b5eb-bc79c3023466\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" Feb 20 12:15:18.966543 master-0 kubenswrapper[31420]: I0220 12:15:18.963516 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8585d57e-59ce-4616-9c40-80fa1d13357c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dml4g\" (UID: \"8585d57e-59ce-4616-9c40-80fa1d13357c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:18.966543 master-0 kubenswrapper[31420]: E0220 12:15:18.964007 31420 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 20 12:15:18.966543 master-0 kubenswrapper[31420]: I0220 12:15:18.964053 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b686af0c-791e-42be-b608-e1a265d973a0-nmstate-lock\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.966543 master-0 kubenswrapper[31420]: E0220 12:15:18.964454 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cd442ad-ad65-497a-b5eb-bc79c3023466-tls-key-pair podName:6cd442ad-ad65-497a-b5eb-bc79c3023466 nodeName:}" failed. No retries permitted until 2026-02-20 12:15:19.464433462 +0000 UTC m=+624.183671703 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/6cd442ad-ad65-497a-b5eb-bc79c3023466-tls-key-pair") pod "nmstate-webhook-866bcb46dc-qj8cb" (UID: "6cd442ad-ad65-497a-b5eb-bc79c3023466") : secret "openshift-nmstate-webhook" not found Feb 20 12:15:18.966543 master-0 kubenswrapper[31420]: I0220 12:15:18.964557 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b686af0c-791e-42be-b608-e1a265d973a0-ovs-socket\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.986505 master-0 kubenswrapper[31420]: I0220 12:15:18.986434 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5c25\" (UniqueName: \"kubernetes.io/projected/b686af0c-791e-42be-b608-e1a265d973a0-kube-api-access-n5c25\") pod \"nmstate-handler-k6pl2\" (UID: \"b686af0c-791e-42be-b608-e1a265d973a0\") " pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:18.990683 master-0 kubenswrapper[31420]: I0220 12:15:18.990653 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ncgh\" (UniqueName: \"kubernetes.io/projected/6cd442ad-ad65-497a-b5eb-bc79c3023466-kube-api-access-8ncgh\") pod \"nmstate-webhook-866bcb46dc-qj8cb\" (UID: \"6cd442ad-ad65-497a-b5eb-bc79c3023466\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" Feb 20 12:15:18.996619 master-0 kubenswrapper[31420]: I0220 12:15:18.996321 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-r22kg" Feb 20 12:15:19.026545 master-0 kubenswrapper[31420]: I0220 12:15:19.022566 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6784f9677c-8sx5l"] Feb 20 12:15:19.026545 master-0 kubenswrapper[31420]: I0220 12:15:19.023890 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.049153 master-0 kubenswrapper[31420]: I0220 12:15:19.049050 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6784f9677c-8sx5l"] Feb 20 12:15:19.067728 master-0 kubenswrapper[31420]: I0220 12:15:19.067629 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-k6pl2" Feb 20 12:15:19.070446 master-0 kubenswrapper[31420]: I0220 12:15:19.070379 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8585d57e-59ce-4616-9c40-80fa1d13357c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dml4g\" (UID: \"8585d57e-59ce-4616-9c40-80fa1d13357c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:19.070578 master-0 kubenswrapper[31420]: I0220 12:15:19.070553 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdvft\" (UniqueName: \"kubernetes.io/projected/8585d57e-59ce-4616-9c40-80fa1d13357c-kube-api-access-zdvft\") pod \"nmstate-console-plugin-5c78fc5d65-dml4g\" (UID: \"8585d57e-59ce-4616-9c40-80fa1d13357c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:19.070692 master-0 kubenswrapper[31420]: I0220 12:15:19.070670 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8585d57e-59ce-4616-9c40-80fa1d13357c-nginx-conf\") pod 
\"nmstate-console-plugin-5c78fc5d65-dml4g\" (UID: \"8585d57e-59ce-4616-9c40-80fa1d13357c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:19.079176 master-0 kubenswrapper[31420]: I0220 12:15:19.079126 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8585d57e-59ce-4616-9c40-80fa1d13357c-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dml4g\" (UID: \"8585d57e-59ce-4616-9c40-80fa1d13357c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:19.085312 master-0 kubenswrapper[31420]: I0220 12:15:19.085266 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8585d57e-59ce-4616-9c40-80fa1d13357c-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dml4g\" (UID: \"8585d57e-59ce-4616-9c40-80fa1d13357c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:19.091344 master-0 kubenswrapper[31420]: I0220 12:15:19.091312 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdvft\" (UniqueName: \"kubernetes.io/projected/8585d57e-59ce-4616-9c40-80fa1d13357c-kube-api-access-zdvft\") pod \"nmstate-console-plugin-5c78fc5d65-dml4g\" (UID: \"8585d57e-59ce-4616-9c40-80fa1d13357c\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:19.170430 master-0 kubenswrapper[31420]: I0220 12:15:19.169988 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" Feb 20 12:15:19.171775 master-0 kubenswrapper[31420]: I0220 12:15:19.171740 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-oauth-serving-cert\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.171924 master-0 kubenswrapper[31420]: I0220 12:15:19.171908 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-console-serving-cert\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.172040 master-0 kubenswrapper[31420]: I0220 12:15:19.172020 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-console-config\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.172138 master-0 kubenswrapper[31420]: I0220 12:15:19.172124 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-trusted-ca-bundle\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.172248 master-0 kubenswrapper[31420]: I0220 12:15:19.172235 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-service-ca\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.172325 master-0 kubenswrapper[31420]: I0220 12:15:19.172313 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szl6j\" (UniqueName: \"kubernetes.io/projected/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-kube-api-access-szl6j\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.172435 master-0 kubenswrapper[31420]: I0220 12:15:19.172422 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-console-oauth-config\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: I0220 12:15:19.287795 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-trusted-ca-bundle\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: I0220 12:15:19.287927 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-service-ca\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: I0220 12:15:19.287950 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szl6j\" (UniqueName: \"kubernetes.io/projected/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-kube-api-access-szl6j\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: I0220 12:15:19.288777 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-service-ca\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: I0220 12:15:19.288865 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-console-oauth-config\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: I0220 12:15:19.288899 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-oauth-serving-cert\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: I0220 12:15:19.288939 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-trusted-ca-bundle\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: 
I0220 12:15:19.288951 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-console-serving-cert\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: I0220 12:15:19.289046 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-console-config\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290263 master-0 kubenswrapper[31420]: I0220 12:15:19.289670 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-console-config\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.290782 master-0 kubenswrapper[31420]: I0220 12:15:19.290341 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-oauth-serving-cert\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.292907 master-0 kubenswrapper[31420]: I0220 12:15:19.292873 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-console-oauth-config\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.293563 master-0 
kubenswrapper[31420]: I0220 12:15:19.293516 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-console-serving-cert\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.319441 master-0 kubenswrapper[31420]: I0220 12:15:19.319392 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szl6j\" (UniqueName: \"kubernetes.io/projected/ff341ee9-5a82-46f3-b6b5-4e4adb9a242e-kube-api-access-szl6j\") pod \"console-6784f9677c-8sx5l\" (UID: \"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e\") " pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.470723 master-0 kubenswrapper[31420]: I0220 12:15:19.470663 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6784f9677c-8sx5l" Feb 20 12:15:19.494994 master-0 kubenswrapper[31420]: I0220 12:15:19.494243 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6cd442ad-ad65-497a-b5eb-bc79c3023466-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qj8cb\" (UID: \"6cd442ad-ad65-497a-b5eb-bc79c3023466\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" Feb 20 12:15:19.501136 master-0 kubenswrapper[31420]: I0220 12:15:19.497173 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6cd442ad-ad65-497a-b5eb-bc79c3023466-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qj8cb\" (UID: \"6cd442ad-ad65-497a-b5eb-bc79c3023466\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" Feb 20 12:15:19.553558 master-0 kubenswrapper[31420]: I0220 12:15:19.553495 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-r22kg"] Feb 20 
12:15:19.562279 master-0 kubenswrapper[31420]: W0220 12:15:19.562223 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3c9202b_541d_4ec7_9ef5_d5da935ad5d9.slice/crio-94a50296e8e9c753a38f8452e78c53884b401e0fb1b76c71673b76c10e778f1b WatchSource:0}: Error finding container 94a50296e8e9c753a38f8452e78c53884b401e0fb1b76c71673b76c10e778f1b: Status 404 returned error can't find the container with id 94a50296e8e9c753a38f8452e78c53884b401e0fb1b76c71673b76c10e778f1b
Feb 20 12:15:19.609483 master-0 kubenswrapper[31420]: I0220 12:15:19.609321 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb"
Feb 20 12:15:19.639080 master-0 kubenswrapper[31420]: I0220 12:15:19.639006 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-r22kg" event={"ID":"a3c9202b-541d-4ec7-9ef5-d5da935ad5d9","Type":"ContainerStarted","Data":"94a50296e8e9c753a38f8452e78c53884b401e0fb1b76c71673b76c10e778f1b"}
Feb 20 12:15:19.641787 master-0 kubenswrapper[31420]: I0220 12:15:19.641748 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k6pl2" event={"ID":"b686af0c-791e-42be-b608-e1a265d973a0","Type":"ContainerStarted","Data":"b22bab56e360ed0e8d9ebea76eaa491243fb9ac7b21abd9a0f4be5a05228e45f"}
Feb 20 12:15:19.643885 master-0 kubenswrapper[31420]: I0220 12:15:19.643833 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-vdrkc" event={"ID":"d8a5df14-16b6-4d50-900b-8f0c241b1d1b","Type":"ContainerStarted","Data":"83ac3327a735a553ad5d7a6fe7b48d217ba1992b937cd9bd7b7381c036b2220e"}
Feb 20 12:15:19.644768 master-0 kubenswrapper[31420]: I0220 12:15:19.644723 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:19.667207 master-0 kubenswrapper[31420]: I0220 12:15:19.667120 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-vdrkc" podStartSLOduration=2.5508420259999998 podStartE2EDuration="3.667098216s" podCreationTimestamp="2026-02-20 12:15:16 +0000 UTC" firstStartedPulling="2026-02-20 12:15:18.309061613 +0000 UTC m=+623.028299894" lastFinishedPulling="2026-02-20 12:15:19.425317843 +0000 UTC m=+624.144556084" observedRunningTime="2026-02-20 12:15:19.660886382 +0000 UTC m=+624.380124643" watchObservedRunningTime="2026-02-20 12:15:19.667098216 +0000 UTC m=+624.386336457"
Feb 20 12:15:19.719474 master-0 kubenswrapper[31420]: I0220 12:15:19.719407 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g"]
Feb 20 12:15:19.722052 master-0 kubenswrapper[31420]: W0220 12:15:19.722001 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8585d57e_59ce_4616_9c40_80fa1d13357c.slice/crio-355b8dc054a8cc55c5e65bfd1e3c0a4fab27fb3bb4b229d57b3be44e74276312 WatchSource:0}: Error finding container 355b8dc054a8cc55c5e65bfd1e3c0a4fab27fb3bb4b229d57b3be44e74276312: Status 404 returned error can't find the container with id 355b8dc054a8cc55c5e65bfd1e3c0a4fab27fb3bb4b229d57b3be44e74276312
Feb 20 12:15:19.892570 master-0 kubenswrapper[31420]: I0220 12:15:19.892483 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6784f9677c-8sx5l"]
Feb 20 12:15:20.024880 master-0 kubenswrapper[31420]: I0220 12:15:20.024815 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb"]
Feb 20 12:15:20.030555 master-0 kubenswrapper[31420]: W0220 12:15:20.030468 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cd442ad_ad65_497a_b5eb_bc79c3023466.slice/crio-717ea65f584ca7b4d8e6fbf6ea591adb02b1f30e7a6286cdcbc3928af0fbf457 WatchSource:0}: Error finding container 717ea65f584ca7b4d8e6fbf6ea591adb02b1f30e7a6286cdcbc3928af0fbf457: Status 404 returned error can't find the container with id 717ea65f584ca7b4d8e6fbf6ea591adb02b1f30e7a6286cdcbc3928af0fbf457
Feb 20 12:15:20.410357 master-0 kubenswrapper[31420]: I0220 12:15:20.410225 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:20.415065 master-0 kubenswrapper[31420]: I0220 12:15:20.415006 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b91f2548-98e3-418c-9a05-58502d67d66f-memberlist\") pod \"speaker-r94p4\" (UID: \"b91f2548-98e3-418c-9a05-58502d67d66f\") " pod="metallb-system/speaker-r94p4"
Feb 20 12:15:20.533968 master-0 kubenswrapper[31420]: I0220 12:15:20.533797 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-r94p4"
Feb 20 12:15:20.567309 master-0 kubenswrapper[31420]: W0220 12:15:20.567244 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb91f2548_98e3_418c_9a05_58502d67d66f.slice/crio-8a0232edf464a6e29e3c8754ed1aebbcb471451956cda4f6d83bf6f84b351f88 WatchSource:0}: Error finding container 8a0232edf464a6e29e3c8754ed1aebbcb471451956cda4f6d83bf6f84b351f88: Status 404 returned error can't find the container with id 8a0232edf464a6e29e3c8754ed1aebbcb471451956cda4f6d83bf6f84b351f88
Feb 20 12:15:20.672139 master-0 kubenswrapper[31420]: I0220 12:15:20.671892 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" event={"ID":"6cd442ad-ad65-497a-b5eb-bc79c3023466","Type":"ContainerStarted","Data":"717ea65f584ca7b4d8e6fbf6ea591adb02b1f30e7a6286cdcbc3928af0fbf457"}
Feb 20 12:15:20.673819 master-0 kubenswrapper[31420]: I0220 12:15:20.673763 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" event={"ID":"8585d57e-59ce-4616-9c40-80fa1d13357c","Type":"ContainerStarted","Data":"355b8dc054a8cc55c5e65bfd1e3c0a4fab27fb3bb4b229d57b3be44e74276312"}
Feb 20 12:15:20.675741 master-0 kubenswrapper[31420]: I0220 12:15:20.675596 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6784f9677c-8sx5l" event={"ID":"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e","Type":"ContainerStarted","Data":"3a9cb102350e6ed401722fca62efafb18c6801518798ba62a5f37ae5797be458"}
Feb 20 12:15:20.675741 master-0 kubenswrapper[31420]: I0220 12:15:20.675675 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6784f9677c-8sx5l" event={"ID":"ff341ee9-5a82-46f3-b6b5-4e4adb9a242e","Type":"ContainerStarted","Data":"92ebc20f31bfd2d20d8e107843456ed7d1552f7c48f6a69d1c5266f28d0fb797"}
Feb 20 12:15:20.677106 master-0 kubenswrapper[31420]: I0220 12:15:20.677072 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r94p4" event={"ID":"b91f2548-98e3-418c-9a05-58502d67d66f","Type":"ContainerStarted","Data":"8a0232edf464a6e29e3c8754ed1aebbcb471451956cda4f6d83bf6f84b351f88"}
Feb 20 12:15:20.710432 master-0 kubenswrapper[31420]: I0220 12:15:20.710019 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6784f9677c-8sx5l" podStartSLOduration=2.709800776 podStartE2EDuration="2.709800776s" podCreationTimestamp="2026-02-20 12:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:15:20.698202701 +0000 UTC m=+625.417440952" watchObservedRunningTime="2026-02-20 12:15:20.709800776 +0000 UTC m=+625.429039017"
Feb 20 12:15:21.689399 master-0 kubenswrapper[31420]: I0220 12:15:21.689344 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r94p4" event={"ID":"b91f2548-98e3-418c-9a05-58502d67d66f","Type":"ContainerStarted","Data":"039b4edba0e1fcdd50d966a25d5a1bb0483dcdf0d00f7e5934fbd0b796719b33"}
Feb 20 12:15:21.689399 master-0 kubenswrapper[31420]: I0220 12:15:21.689406 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r94p4" event={"ID":"b91f2548-98e3-418c-9a05-58502d67d66f","Type":"ContainerStarted","Data":"aaf62f92f91209cb76d4e380a366bb0ace8a8287348a65572aaed6c2eda3c8ef"}
Feb 20 12:15:21.714548 master-0 kubenswrapper[31420]: I0220 12:15:21.714450 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-r94p4" podStartSLOduration=5.714430019 podStartE2EDuration="5.714430019s" podCreationTimestamp="2026-02-20 12:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:15:21.707341841 +0000 UTC m=+626.426580082" watchObservedRunningTime="2026-02-20 12:15:21.714430019 +0000 UTC m=+626.433668260"
Feb 20 12:15:22.698408 master-0 kubenswrapper[31420]: I0220 12:15:22.698315 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-r94p4"
Feb 20 12:15:25.738001 master-0 kubenswrapper[31420]: I0220 12:15:25.737912 31420 generic.go:334] "Generic (PLEG): container finished" podID="ebec7408-42ea-4bdd-9cc9-a42caaefe664" containerID="cb3d1438daf6143130331476727c7f827cfffd7f68294e7b60205101b315decc" exitCode=0
Feb 20 12:15:25.741638 master-0 kubenswrapper[31420]: I0220 12:15:25.739708 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerDied","Data":"cb3d1438daf6143130331476727c7f827cfffd7f68294e7b60205101b315decc"}
Feb 20 12:15:25.748433 master-0 kubenswrapper[31420]: I0220 12:15:25.748370 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k6pl2" event={"ID":"b686af0c-791e-42be-b608-e1a265d973a0","Type":"ContainerStarted","Data":"5ffc2f6949ba4de496bff50ebc69d8bceb29013f397ad1f2610d5ca6c6846eaf"}
Feb 20 12:15:25.749423 master-0 kubenswrapper[31420]: I0220 12:15:25.749377 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-k6pl2"
Feb 20 12:15:25.752467 master-0 kubenswrapper[31420]: I0220 12:15:25.751811 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" event={"ID":"6cd442ad-ad65-497a-b5eb-bc79c3023466","Type":"ContainerStarted","Data":"33ef9733fb20b4b85d530325c4855a1539366811ad81076d3fdeee673349ef41"}
Feb 20 12:15:25.752668 master-0 kubenswrapper[31420]: I0220 12:15:25.752491 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb"
Feb 20 12:15:25.755640 master-0 kubenswrapper[31420]: I0220 12:15:25.755520 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz" event={"ID":"91455b18-03a0-49c7-aa61-59b91e88a5fe","Type":"ContainerStarted","Data":"242ba7caacaabf8fc161446d3135bdc4092b248c7a4960b9c0b64eae01fd2530"}
Feb 20 12:15:25.755946 master-0 kubenswrapper[31420]: I0220 12:15:25.755910 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:25.758599 master-0 kubenswrapper[31420]: I0220 12:15:25.758464 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" event={"ID":"8585d57e-59ce-4616-9c40-80fa1d13357c","Type":"ContainerStarted","Data":"e0d0d8063c7a8a2ca279097565ed87b90887535fd26ea80dc66aa7b2e2c7deb8"}
Feb 20 12:15:25.763362 master-0 kubenswrapper[31420]: I0220 12:15:25.763242 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-r22kg" event={"ID":"a3c9202b-541d-4ec7-9ef5-d5da935ad5d9","Type":"ContainerStarted","Data":"10d73e4565834b73be0f34f5e5ead984759d0af73a7c80821c32c83c02ab5a32"}
Feb 20 12:15:25.763362 master-0 kubenswrapper[31420]: I0220 12:15:25.763355 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-r22kg" event={"ID":"a3c9202b-541d-4ec7-9ef5-d5da935ad5d9","Type":"ContainerStarted","Data":"f4fdfc5566e970899b6b5bfaf54309bbdffd48fd157a0827f7de5fdd00c58922"}
Feb 20 12:15:25.817516 master-0 kubenswrapper[31420]: I0220 12:15:25.813634 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-r22kg" podStartSLOduration=2.902958747 podStartE2EDuration="7.813611661s" podCreationTimestamp="2026-02-20 12:15:18 +0000 UTC" firstStartedPulling="2026-02-20 12:15:19.584569684 +0000 UTC m=+624.303807925" lastFinishedPulling="2026-02-20 12:15:24.495222558 +0000 UTC m=+629.214460839" observedRunningTime="2026-02-20 12:15:25.807886811 +0000 UTC m=+630.527125052" watchObservedRunningTime="2026-02-20 12:15:25.813611661 +0000 UTC m=+630.532849902"
Feb 20 12:15:25.841023 master-0 kubenswrapper[31420]: I0220 12:15:25.840930 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz" podStartSLOduration=3.267832572 podStartE2EDuration="9.840910226s" podCreationTimestamp="2026-02-20 12:15:16 +0000 UTC" firstStartedPulling="2026-02-20 12:15:17.981111906 +0000 UTC m=+622.700350177" lastFinishedPulling="2026-02-20 12:15:24.55418959 +0000 UTC m=+629.273427831" observedRunningTime="2026-02-20 12:15:25.833217011 +0000 UTC m=+630.552455272" watchObservedRunningTime="2026-02-20 12:15:25.840910226 +0000 UTC m=+630.560148457"
Feb 20 12:15:25.857972 master-0 kubenswrapper[31420]: I0220 12:15:25.857832 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dml4g" podStartSLOduration=3.08334607 podStartE2EDuration="7.857806739s" podCreationTimestamp="2026-02-20 12:15:18 +0000 UTC" firstStartedPulling="2026-02-20 12:15:19.726289424 +0000 UTC m=+624.445527665" lastFinishedPulling="2026-02-20 12:15:24.500750063 +0000 UTC m=+629.219988334" observedRunningTime="2026-02-20 12:15:25.850621928 +0000 UTC m=+630.569860169" watchObservedRunningTime="2026-02-20 12:15:25.857806739 +0000 UTC m=+630.577044980"
Feb 20 12:15:25.881125 master-0 kubenswrapper[31420]: I0220 12:15:25.880999 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-k6pl2" podStartSLOduration=2.500345628 podStartE2EDuration="7.880973928s" podCreationTimestamp="2026-02-20 12:15:18 +0000 UTC" firstStartedPulling="2026-02-20 12:15:19.119334011 +0000 UTC m=+623.838572252" lastFinishedPulling="2026-02-20 12:15:24.499962271 +0000 UTC m=+629.219200552" observedRunningTime="2026-02-20 12:15:25.871678818 +0000 UTC m=+630.590917069" watchObservedRunningTime="2026-02-20 12:15:25.880973928 +0000 UTC m=+630.600212179"
Feb 20 12:15:26.783179 master-0 kubenswrapper[31420]: I0220 12:15:26.783004 31420 generic.go:334] "Generic (PLEG): container finished" podID="ebec7408-42ea-4bdd-9cc9-a42caaefe664" containerID="275d601dd1d10adbd938156ee4ad7a53ff80c114f1c7362033929914d9475632" exitCode=0
Feb 20 12:15:26.783870 master-0 kubenswrapper[31420]: I0220 12:15:26.783276 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerDied","Data":"275d601dd1d10adbd938156ee4ad7a53ff80c114f1c7362033929914d9475632"}
Feb 20 12:15:26.850255 master-0 kubenswrapper[31420]: I0220 12:15:26.850126 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb" podStartSLOduration=4.379384892 podStartE2EDuration="8.850105242s" podCreationTimestamp="2026-02-20 12:15:18 +0000 UTC" firstStartedPulling="2026-02-20 12:15:20.033086459 +0000 UTC m=+624.752324700" lastFinishedPulling="2026-02-20 12:15:24.503806769 +0000 UTC m=+629.223045050" observedRunningTime="2026-02-20 12:15:25.892552113 +0000 UTC m=+630.611790354" watchObservedRunningTime="2026-02-20 12:15:26.850105242 +0000 UTC m=+631.569343493"
Feb 20 12:15:27.804736 master-0 kubenswrapper[31420]: I0220 12:15:27.804629 31420 generic.go:334] "Generic (PLEG): container finished" podID="ebec7408-42ea-4bdd-9cc9-a42caaefe664" containerID="f57d8ddc3466cd219f82344243d9208fc2b105d03442fbf8f3c1b50bd546212f" exitCode=0
Feb 20 12:15:27.805713 master-0 kubenswrapper[31420]: I0220 12:15:27.804738 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerDied","Data":"f57d8ddc3466cd219f82344243d9208fc2b105d03442fbf8f3c1b50bd546212f"}
Feb 20 12:15:28.818725 master-0 kubenswrapper[31420]: I0220 12:15:28.818653 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerStarted","Data":"7bc4b2c47abbc8688ac581f1aed1b32b5bb0765d0259eddd715fd448c65853e6"}
Feb 20 12:15:28.818725 master-0 kubenswrapper[31420]: I0220 12:15:28.818714 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerStarted","Data":"89a798d38cf32b4f4b8006c3601f595b0b6e9829390286a50afff6a8a4fbd849"}
Feb 20 12:15:28.818725 master-0 kubenswrapper[31420]: I0220 12:15:28.818730 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerStarted","Data":"127f383516698733500c240f7c9a5664fa685fff37cd05ed06d1620f29ba5cc1"}
Feb 20 12:15:28.819325 master-0 kubenswrapper[31420]: I0220 12:15:28.818742 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerStarted","Data":"b2f23097b4281d0f8d6be8415bac91f07fa8a372bd90d30474888bc16a35146d"}
Feb 20 12:15:28.819325 master-0 kubenswrapper[31420]: I0220 12:15:28.818753 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerStarted","Data":"7d369893508cea5110f00d2b37edd60fca199aeadca96ef0054d0d17603285de"}
Feb 20 12:15:29.106663 master-0 kubenswrapper[31420]: I0220 12:15:29.106593 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-k6pl2"
Feb 20 12:15:29.471797 master-0 kubenswrapper[31420]: I0220 12:15:29.471681 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6784f9677c-8sx5l"
Feb 20 12:15:29.472373 master-0 kubenswrapper[31420]: I0220 12:15:29.472034 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6784f9677c-8sx5l"
Feb 20 12:15:29.477787 master-0 kubenswrapper[31420]: I0220 12:15:29.477742 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6784f9677c-8sx5l"
Feb 20 12:15:29.839214 master-0 kubenswrapper[31420]: I0220 12:15:29.839108 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cfkwh" event={"ID":"ebec7408-42ea-4bdd-9cc9-a42caaefe664","Type":"ContainerStarted","Data":"db18bc2525d0c1be84354c1dafa1249b92d968465685a1920d8c9e1ff56302ca"}
Feb 20 12:15:29.840306 master-0 kubenswrapper[31420]: I0220 12:15:29.839573 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:29.845273 master-0 kubenswrapper[31420]: I0220 12:15:29.845163 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6784f9677c-8sx5l"
Feb 20 12:15:29.880488 master-0 kubenswrapper[31420]: I0220 12:15:29.880296 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cfkwh" podStartSLOduration=6.378210164 podStartE2EDuration="13.880273272s" podCreationTimestamp="2026-02-20 12:15:16 +0000 UTC" firstStartedPulling="2026-02-20 12:15:17.061218807 +0000 UTC m=+621.780457048" lastFinishedPulling="2026-02-20 12:15:24.563281875 +0000 UTC m=+629.282520156" observedRunningTime="2026-02-20 12:15:29.878973336 +0000 UTC m=+634.598211617" watchObservedRunningTime="2026-02-20 12:15:29.880273272 +0000 UTC m=+634.599511523"
Feb 20 12:15:29.972471 master-0 kubenswrapper[31420]: I0220 12:15:29.970843 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67d46bcf94-kpph6"]
Feb 20 12:15:30.539256 master-0 kubenswrapper[31420]: I0220 12:15:30.539169 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-r94p4"
Feb 20 12:15:31.906714 master-0 kubenswrapper[31420]: I0220 12:15:31.906620 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:31.941444 master-0 kubenswrapper[31420]: I0220 12:15:31.941359 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:37.472898 master-0 kubenswrapper[31420]: I0220 12:15:37.472840 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-d7llz"
Feb 20 12:15:37.583288 master-0 kubenswrapper[31420]: I0220 12:15:37.582799 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-vdrkc"
Feb 20 12:15:39.617719 master-0 kubenswrapper[31420]: I0220 12:15:39.617662 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qj8cb"
Feb 20 12:15:44.280665 master-0 kubenswrapper[31420]: I0220 12:15:44.280586 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-shj7q"]
Feb 20 12:15:44.282103 master-0 kubenswrapper[31420]: I0220 12:15:44.282072 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.285065 master-0 kubenswrapper[31420]: I0220 12:15:44.285041 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Feb 20 12:15:44.297628 master-0 kubenswrapper[31420]: I0220 12:15:44.297580 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-shj7q"]
Feb 20 12:15:44.385912 master-0 kubenswrapper[31420]: I0220 12:15:44.385826 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-run-udev\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.385912 master-0 kubenswrapper[31420]: I0220 12:15:44.385904 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-node-plugin-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.386165 master-0 kubenswrapper[31420]: I0220 12:15:44.385961 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv5nh\" (UniqueName: \"kubernetes.io/projected/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-kube-api-access-tv5nh\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.386252 master-0 kubenswrapper[31420]: I0220 12:15:44.386182 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-metrics-cert\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.386302 master-0 kubenswrapper[31420]: I0220 12:15:44.386256 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-sys\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.386302 master-0 kubenswrapper[31420]: I0220 12:15:44.386278 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-device-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.386368 master-0 kubenswrapper[31420]: I0220 12:15:44.386305 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-registration-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.386368 master-0 kubenswrapper[31420]: I0220 12:15:44.386327 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-csi-plugin-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.386368 master-0 kubenswrapper[31420]: I0220 12:15:44.386358 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-lvmd-config\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.386486 master-0 kubenswrapper[31420]: I0220 12:15:44.386433 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-file-lock-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.386486 master-0 kubenswrapper[31420]: I0220 12:15:44.386460 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-pod-volumes-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.488666 master-0 kubenswrapper[31420]: I0220 12:15:44.488609 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-run-udev\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.488968 master-0 kubenswrapper[31420]: I0220 12:15:44.488947 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-node-plugin-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.489121 master-0 kubenswrapper[31420]: I0220 12:15:44.489103 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv5nh\" (UniqueName: \"kubernetes.io/projected/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-kube-api-access-tv5nh\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.489282 master-0 kubenswrapper[31420]: I0220 12:15:44.489264 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-metrics-cert\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.489396 master-0 kubenswrapper[31420]: I0220 12:15:44.489380 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-sys\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.489495 master-0 kubenswrapper[31420]: I0220 12:15:44.489479 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-device-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.489690 master-0 kubenswrapper[31420]: I0220 12:15:44.489671 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-registration-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.489811 master-0 kubenswrapper[31420]: I0220 12:15:44.489795 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-csi-plugin-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.489944 master-0 kubenswrapper[31420]: I0220 12:15:44.488799 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-run-udev\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.490010 master-0 kubenswrapper[31420]: I0220 12:15:44.489675 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-device-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.490010 master-0 kubenswrapper[31420]: I0220 12:15:44.489607 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-sys\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.490094 master-0 kubenswrapper[31420]: I0220 12:15:44.489740 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-registration-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.490094 master-0 kubenswrapper[31420]: I0220 12:15:44.489909 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-lvmd-config\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.490181 master-0 kubenswrapper[31420]: I0220 12:15:44.490100 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-csi-plugin-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.490226 master-0 kubenswrapper[31420]: I0220 12:15:44.489314 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-node-plugin-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.490486 master-0 kubenswrapper[31420]: I0220 12:15:44.490465 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-lvmd-config\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.490677 master-0 kubenswrapper[31420]: I0220 12:15:44.490656 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-file-lock-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.490791 master-0 kubenswrapper[31420]: I0220 12:15:44.490773 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-pod-volumes-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.491054 master-0 kubenswrapper[31420]: I0220 12:15:44.491008 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-pod-volumes-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.491454 master-0 kubenswrapper[31420]: I0220 12:15:44.491396 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-file-lock-dir\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.493835 master-0 kubenswrapper[31420]: I0220 12:15:44.493770 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-metrics-cert\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.504779 master-0 kubenswrapper[31420]: I0220 12:15:44.504733 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv5nh\" (UniqueName: \"kubernetes.io/projected/449f5fe0-a9c5-41de-aa38-3f96b439a8ba-kube-api-access-tv5nh\") pod \"vg-manager-shj7q\" (UID: \"449f5fe0-a9c5-41de-aa38-3f96b439a8ba\") " pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:44.637239 master-0 kubenswrapper[31420]: I0220 12:15:44.636953 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:45.145958 master-0 kubenswrapper[31420]: I0220 12:15:45.145893 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-shj7q"]
Feb 20 12:15:45.156084 master-0 kubenswrapper[31420]: W0220 12:15:45.156014 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod449f5fe0_a9c5_41de_aa38_3f96b439a8ba.slice/crio-ca1984288bd1eec4613b7a549a8bba401f421f2c533084b8536d4a41cdc678e1 WatchSource:0}: Error finding container ca1984288bd1eec4613b7a549a8bba401f421f2c533084b8536d4a41cdc678e1: Status 404 returned error can't find the container with id ca1984288bd1eec4613b7a549a8bba401f421f2c533084b8536d4a41cdc678e1
Feb 20 12:15:46.047438 master-0 kubenswrapper[31420]: I0220 12:15:46.047350 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-shj7q" event={"ID":"449f5fe0-a9c5-41de-aa38-3f96b439a8ba","Type":"ContainerStarted","Data":"15e9e9d31d8ac5d5000745152ca4fbae7f3e866dea7510a27c70a3c99c3adc4b"}
Feb 20 12:15:46.047972 master-0 kubenswrapper[31420]: I0220 12:15:46.047450 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-shj7q" event={"ID":"449f5fe0-a9c5-41de-aa38-3f96b439a8ba","Type":"ContainerStarted","Data":"ca1984288bd1eec4613b7a549a8bba401f421f2c533084b8536d4a41cdc678e1"}
Feb 20 12:15:46.081653 master-0 kubenswrapper[31420]: I0220 12:15:46.081499 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-shj7q" podStartSLOduration=2.081483311 podStartE2EDuration="2.081483311s" podCreationTimestamp="2026-02-20 12:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:15:46.073401675 +0000 UTC m=+650.792639956" watchObservedRunningTime="2026-02-20 12:15:46.081483311 +0000 UTC m=+650.800721562"
Feb 20 12:15:46.913549 master-0 kubenswrapper[31420]: I0220 12:15:46.912980 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cfkwh"
Feb 20 12:15:48.075297 master-0 kubenswrapper[31420]: I0220 12:15:48.075223 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-shj7q_449f5fe0-a9c5-41de-aa38-3f96b439a8ba/vg-manager/0.log"
Feb 20 12:15:48.075297 master-0 kubenswrapper[31420]: I0220 12:15:48.075288 31420 generic.go:334] "Generic (PLEG): container finished" podID="449f5fe0-a9c5-41de-aa38-3f96b439a8ba" containerID="15e9e9d31d8ac5d5000745152ca4fbae7f3e866dea7510a27c70a3c99c3adc4b" exitCode=1
Feb 20 12:15:48.076321 master-0 kubenswrapper[31420]: I0220 12:15:48.075323 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-shj7q" event={"ID":"449f5fe0-a9c5-41de-aa38-3f96b439a8ba","Type":"ContainerDied","Data":"15e9e9d31d8ac5d5000745152ca4fbae7f3e866dea7510a27c70a3c99c3adc4b"}
Feb 20 12:15:48.076321 master-0 kubenswrapper[31420]: I0220 12:15:48.075830 31420 scope.go:117] "RemoveContainer" containerID="15e9e9d31d8ac5d5000745152ca4fbae7f3e866dea7510a27c70a3c99c3adc4b"
Feb 20 12:15:48.439269 master-0 kubenswrapper[31420]: I0220 12:15:48.438106 31420 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Feb 20 12:15:49.101289 master-0 kubenswrapper[31420]: I0220 12:15:49.101193 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-shj7q_449f5fe0-a9c5-41de-aa38-3f96b439a8ba/vg-manager/0.log"
Feb 20 12:15:49.102154 master-0 kubenswrapper[31420]: I0220 12:15:49.101309 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-shj7q" event={"ID":"449f5fe0-a9c5-41de-aa38-3f96b439a8ba","Type":"ContainerStarted","Data":"f1de7d0332a35578b264b6ef8c2e6f996383b7b3bb84318b0b4c17d8fcd926ba"}
Feb 20 12:15:49.273808 master-0 kubenswrapper[31420]: I0220 12:15:49.273616 31420 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-20T12:15:48.438151214Z","Handler":null,"Name":""}
Feb 20 12:15:49.277180 master-0 kubenswrapper[31420]: I0220 12:15:49.277106 31420 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Feb 20 12:15:49.277306 master-0 kubenswrapper[31420]: I0220 12:15:49.277193 31420 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Feb 20 12:15:54.637604 master-0 kubenswrapper[31420]: I0220 12:15:54.637472 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:54.640844 master-0 kubenswrapper[31420]: I0220 12:15:54.640786 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:55.036473 master-0 kubenswrapper[31420]: I0220 12:15:55.036377 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-67d46bcf94-kpph6" podUID="5f543dc6-6a36-46e9-b01c-bb79931b13ac" containerName="console" containerID="cri-o://03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453" gracePeriod=15
Feb 20 12:15:55.164688 master-0 kubenswrapper[31420]: I0220 12:15:55.164600 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-shj7q"
Feb 20 12:15:55.166683 master-0 kubenswrapper[31420]: I0220 12:15:55.166616 31420 kubelet.go:2542] "SyncLoop
(probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-shj7q" Feb 20 12:15:55.765516 master-0 kubenswrapper[31420]: I0220 12:15:55.765378 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67d46bcf94-kpph6_5f543dc6-6a36-46e9-b01c-bb79931b13ac/console/0.log" Feb 20 12:15:55.766270 master-0 kubenswrapper[31420]: I0220 12:15:55.765715 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67d46bcf94-kpph6" Feb 20 12:15:55.908114 master-0 kubenswrapper[31420]: I0220 12:15:55.908009 31420 scope.go:117] "RemoveContainer" containerID="9fb57990b9207fa6d4fe791972eb076de54f242b4467b952a304b997f55aee4c" Feb 20 12:15:55.935920 master-0 kubenswrapper[31420]: I0220 12:15:55.935875 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-oauth-config\") pod \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " Feb 20 12:15:55.936075 master-0 kubenswrapper[31420]: I0220 12:15:55.935984 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-trusted-ca-bundle\") pod \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " Feb 20 12:15:55.936075 master-0 kubenswrapper[31420]: I0220 12:15:55.936066 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-config\") pod \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " Feb 20 12:15:55.936205 master-0 kubenswrapper[31420]: I0220 12:15:55.936106 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-service-ca\") pod \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " Feb 20 12:15:55.936300 master-0 kubenswrapper[31420]: I0220 12:15:55.936259 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-oauth-serving-cert\") pod \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " Feb 20 12:15:55.936451 master-0 kubenswrapper[31420]: I0220 12:15:55.936422 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcrfs\" (UniqueName: \"kubernetes.io/projected/5f543dc6-6a36-46e9-b01c-bb79931b13ac-kube-api-access-bcrfs\") pod \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " Feb 20 12:15:55.936548 master-0 kubenswrapper[31420]: I0220 12:15:55.936466 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-serving-cert\") pod \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\" (UID: \"5f543dc6-6a36-46e9-b01c-bb79931b13ac\") " Feb 20 12:15:55.936919 master-0 kubenswrapper[31420]: I0220 12:15:55.936755 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-service-ca" (OuterVolumeSpecName: "service-ca") pod "5f543dc6-6a36-46e9-b01c-bb79931b13ac" (UID: "5f543dc6-6a36-46e9-b01c-bb79931b13ac"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:15:55.937013 master-0 kubenswrapper[31420]: I0220 12:15:55.936960 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5f543dc6-6a36-46e9-b01c-bb79931b13ac" (UID: "5f543dc6-6a36-46e9-b01c-bb79931b13ac"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:15:55.937618 master-0 kubenswrapper[31420]: I0220 12:15:55.937033 31420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 12:15:55.937864 master-0 kubenswrapper[31420]: I0220 12:15:55.937476 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5f543dc6-6a36-46e9-b01c-bb79931b13ac" (UID: "5f543dc6-6a36-46e9-b01c-bb79931b13ac"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:15:55.938099 master-0 kubenswrapper[31420]: I0220 12:15:55.938039 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-config" (OuterVolumeSpecName: "console-config") pod "5f543dc6-6a36-46e9-b01c-bb79931b13ac" (UID: "5f543dc6-6a36-46e9-b01c-bb79931b13ac"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:15:55.940617 master-0 kubenswrapper[31420]: I0220 12:15:55.940562 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5f543dc6-6a36-46e9-b01c-bb79931b13ac" (UID: "5f543dc6-6a36-46e9-b01c-bb79931b13ac"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:15:55.940869 master-0 kubenswrapper[31420]: I0220 12:15:55.940808 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5f543dc6-6a36-46e9-b01c-bb79931b13ac" (UID: "5f543dc6-6a36-46e9-b01c-bb79931b13ac"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:15:55.941549 master-0 kubenswrapper[31420]: I0220 12:15:55.941443 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f543dc6-6a36-46e9-b01c-bb79931b13ac-kube-api-access-bcrfs" (OuterVolumeSpecName: "kube-api-access-bcrfs") pod "5f543dc6-6a36-46e9-b01c-bb79931b13ac" (UID: "5f543dc6-6a36-46e9-b01c-bb79931b13ac"). InnerVolumeSpecName "kube-api-access-bcrfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:15:56.039768 master-0 kubenswrapper[31420]: I0220 12:15:56.039670 31420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 12:15:56.039768 master-0 kubenswrapper[31420]: I0220 12:15:56.039744 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcrfs\" (UniqueName: \"kubernetes.io/projected/5f543dc6-6a36-46e9-b01c-bb79931b13ac-kube-api-access-bcrfs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:15:56.039768 master-0 kubenswrapper[31420]: I0220 12:15:56.039769 31420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 12:15:56.039768 master-0 kubenswrapper[31420]: I0220 12:15:56.039793 31420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:15:56.040310 master-0 kubenswrapper[31420]: I0220 12:15:56.039815 31420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:15:56.040310 master-0 kubenswrapper[31420]: I0220 12:15:56.039838 31420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f543dc6-6a36-46e9-b01c-bb79931b13ac-console-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:15:56.178264 master-0 kubenswrapper[31420]: I0220 12:15:56.178194 31420 generic.go:334] "Generic (PLEG): container finished" 
podID="5f543dc6-6a36-46e9-b01c-bb79931b13ac" containerID="03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453" exitCode=2 Feb 20 12:15:56.178647 master-0 kubenswrapper[31420]: I0220 12:15:56.178251 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d46bcf94-kpph6" event={"ID":"5f543dc6-6a36-46e9-b01c-bb79931b13ac","Type":"ContainerDied","Data":"03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453"} Feb 20 12:15:56.178734 master-0 kubenswrapper[31420]: I0220 12:15:56.178428 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67d46bcf94-kpph6" Feb 20 12:15:56.178882 master-0 kubenswrapper[31420]: I0220 12:15:56.178668 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d46bcf94-kpph6" event={"ID":"5f543dc6-6a36-46e9-b01c-bb79931b13ac","Type":"ContainerDied","Data":"77daf0bc051c91820775b31053034e8ee60ba431b87896db49a9ae1194b9f5c5"} Feb 20 12:15:56.178945 master-0 kubenswrapper[31420]: I0220 12:15:56.178856 31420 scope.go:117] "RemoveContainer" containerID="03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453" Feb 20 12:15:56.206402 master-0 kubenswrapper[31420]: I0220 12:15:56.206344 31420 scope.go:117] "RemoveContainer" containerID="03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453" Feb 20 12:15:56.207075 master-0 kubenswrapper[31420]: E0220 12:15:56.207007 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453\": container with ID starting with 03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453 not found: ID does not exist" containerID="03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453" Feb 20 12:15:56.207198 master-0 kubenswrapper[31420]: I0220 12:15:56.207072 31420 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453"} err="failed to get container status \"03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453\": rpc error: code = NotFound desc = could not find container \"03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453\": container with ID starting with 03ae87a577455e5806a844ed4c5637b1fbc1a24ab80943d551023758b2052453 not found: ID does not exist" Feb 20 12:15:56.335865 master-0 kubenswrapper[31420]: I0220 12:15:56.335796 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67d46bcf94-kpph6"] Feb 20 12:15:56.343540 master-0 kubenswrapper[31420]: I0220 12:15:56.343461 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-67d46bcf94-kpph6"] Feb 20 12:15:56.351718 master-0 kubenswrapper[31420]: E0220 12:15:56.351663 31420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f543dc6_6a36_46e9_b01c_bb79931b13ac.slice\": RecentStats: unable to find data in memory cache]" Feb 20 12:15:57.515802 master-0 kubenswrapper[31420]: I0220 12:15:57.515694 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f543dc6-6a36-46e9-b01c-bb79931b13ac" path="/var/lib/kubelet/pods/5f543dc6-6a36-46e9-b01c-bb79931b13ac/volumes" Feb 20 12:15:58.200081 master-0 kubenswrapper[31420]: I0220 12:15:58.199980 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sxxrm"] Feb 20 12:15:58.200488 master-0 kubenswrapper[31420]: E0220 12:15:58.200459 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f543dc6-6a36-46e9-b01c-bb79931b13ac" containerName="console" Feb 20 12:15:58.200488 master-0 kubenswrapper[31420]: I0220 12:15:58.200486 31420 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f543dc6-6a36-46e9-b01c-bb79931b13ac" containerName="console" Feb 20 12:15:58.200845 master-0 kubenswrapper[31420]: I0220 12:15:58.200818 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f543dc6-6a36-46e9-b01c-bb79931b13ac" containerName="console" Feb 20 12:15:58.201626 master-0 kubenswrapper[31420]: I0220 12:15:58.201596 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sxxrm" Feb 20 12:15:58.204381 master-0 kubenswrapper[31420]: I0220 12:15:58.204330 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 20 12:15:58.204683 master-0 kubenswrapper[31420]: I0220 12:15:58.204656 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 20 12:15:58.228644 master-0 kubenswrapper[31420]: I0220 12:15:58.228567 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sxxrm"] Feb 20 12:15:58.280422 master-0 kubenswrapper[31420]: I0220 12:15:58.280236 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkmb8\" (UniqueName: \"kubernetes.io/projected/2c9bf6b8-d30d-474f-adc8-97f248eca755-kube-api-access-qkmb8\") pod \"openstack-operator-index-sxxrm\" (UID: \"2c9bf6b8-d30d-474f-adc8-97f248eca755\") " pod="openstack-operators/openstack-operator-index-sxxrm" Feb 20 12:15:58.381976 master-0 kubenswrapper[31420]: I0220 12:15:58.381887 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmb8\" (UniqueName: \"kubernetes.io/projected/2c9bf6b8-d30d-474f-adc8-97f248eca755-kube-api-access-qkmb8\") pod \"openstack-operator-index-sxxrm\" (UID: \"2c9bf6b8-d30d-474f-adc8-97f248eca755\") " pod="openstack-operators/openstack-operator-index-sxxrm" Feb 20 12:15:58.407744 master-0 kubenswrapper[31420]: 
I0220 12:15:58.407674 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkmb8\" (UniqueName: \"kubernetes.io/projected/2c9bf6b8-d30d-474f-adc8-97f248eca755-kube-api-access-qkmb8\") pod \"openstack-operator-index-sxxrm\" (UID: \"2c9bf6b8-d30d-474f-adc8-97f248eca755\") " pod="openstack-operators/openstack-operator-index-sxxrm" Feb 20 12:15:58.522603 master-0 kubenswrapper[31420]: I0220 12:15:58.520505 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sxxrm" Feb 20 12:15:58.940416 master-0 kubenswrapper[31420]: I0220 12:15:58.938703 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sxxrm"] Feb 20 12:15:58.946751 master-0 kubenswrapper[31420]: W0220 12:15:58.946679 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c9bf6b8_d30d_474f_adc8_97f248eca755.slice/crio-66558dbd16bb5835793590940f3642ee43bf6454e08ad9deabb14b885d278ed7 WatchSource:0}: Error finding container 66558dbd16bb5835793590940f3642ee43bf6454e08ad9deabb14b885d278ed7: Status 404 returned error can't find the container with id 66558dbd16bb5835793590940f3642ee43bf6454e08ad9deabb14b885d278ed7 Feb 20 12:15:59.212352 master-0 kubenswrapper[31420]: I0220 12:15:59.212175 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sxxrm" event={"ID":"2c9bf6b8-d30d-474f-adc8-97f248eca755","Type":"ContainerStarted","Data":"66558dbd16bb5835793590940f3642ee43bf6454e08ad9deabb14b885d278ed7"} Feb 20 12:16:00.236995 master-0 kubenswrapper[31420]: I0220 12:16:00.236949 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sxxrm" 
event={"ID":"2c9bf6b8-d30d-474f-adc8-97f248eca755","Type":"ContainerStarted","Data":"2c95cb2702b0bdbacdd4b76ceecf26b4114d21f0a9c0b47de8f8c0358da6f494"} Feb 20 12:16:08.521022 master-0 kubenswrapper[31420]: I0220 12:16:08.520920 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sxxrm" Feb 20 12:16:08.522564 master-0 kubenswrapper[31420]: I0220 12:16:08.522236 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-sxxrm" Feb 20 12:16:08.583221 master-0 kubenswrapper[31420]: I0220 12:16:08.583146 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-sxxrm" Feb 20 12:16:08.609287 master-0 kubenswrapper[31420]: I0220 12:16:08.609173 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sxxrm" podStartSLOduration=9.699548184 podStartE2EDuration="10.609148091s" podCreationTimestamp="2026-02-20 12:15:58 +0000 UTC" firstStartedPulling="2026-02-20 12:15:58.951657916 +0000 UTC m=+663.670896197" lastFinishedPulling="2026-02-20 12:15:59.861257853 +0000 UTC m=+664.580496104" observedRunningTime="2026-02-20 12:16:00.270883752 +0000 UTC m=+664.990121983" watchObservedRunningTime="2026-02-20 12:16:08.609148091 +0000 UTC m=+673.328386362" Feb 20 12:16:09.395686 master-0 kubenswrapper[31420]: I0220 12:16:09.395553 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sxxrm" Feb 20 12:16:10.633836 master-0 kubenswrapper[31420]: I0220 12:16:10.633770 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr"] Feb 20 12:16:10.636648 master-0 kubenswrapper[31420]: I0220 12:16:10.636619 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.650896 master-0 kubenswrapper[31420]: I0220 12:16:10.650852 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr"] Feb 20 12:16:10.770852 master-0 kubenswrapper[31420]: I0220 12:16:10.770779 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.771339 master-0 kubenswrapper[31420]: I0220 12:16:10.771303 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.771659 master-0 kubenswrapper[31420]: I0220 12:16:10.771616 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvm5g\" (UniqueName: \"kubernetes.io/projected/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-kube-api-access-gvm5g\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.874051 master-0 kubenswrapper[31420]: I0220 12:16:10.873950 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.874376 master-0 kubenswrapper[31420]: I0220 12:16:10.874103 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.874376 master-0 kubenswrapper[31420]: I0220 12:16:10.874210 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvm5g\" (UniqueName: \"kubernetes.io/projected/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-kube-api-access-gvm5g\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.874910 master-0 kubenswrapper[31420]: I0220 12:16:10.874844 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.875067 master-0 kubenswrapper[31420]: I0220 12:16:10.874946 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-bundle\") pod 
\"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.893663 master-0 kubenswrapper[31420]: I0220 12:16:10.893560 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvm5g\" (UniqueName: \"kubernetes.io/projected/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-kube-api-access-gvm5g\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:10.969282 master-0 kubenswrapper[31420]: I0220 12:16:10.969193 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" Feb 20 12:16:11.523275 master-0 kubenswrapper[31420]: I0220 12:16:11.523170 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr"] Feb 20 12:16:11.984503 master-0 kubenswrapper[31420]: E0220 12:16:11.984436 31420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de82d75_91b3_4f8d_8bfd_e67bca18fa5e.slice/crio-conmon-f6671d57ebaad60fac5a6c20bda2c2a9ee9b1bb33d59b908866caf427a2b5ad7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de82d75_91b3_4f8d_8bfd_e67bca18fa5e.slice/crio-f6671d57ebaad60fac5a6c20bda2c2a9ee9b1bb33d59b908866caf427a2b5ad7.scope\": RecentStats: unable to find data in memory cache]" Feb 20 12:16:12.391587 master-0 kubenswrapper[31420]: I0220 12:16:12.391293 31420 generic.go:334] "Generic (PLEG): container finished" 
podID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerID="f6671d57ebaad60fac5a6c20bda2c2a9ee9b1bb33d59b908866caf427a2b5ad7" exitCode=0
Feb 20 12:16:12.391587 master-0 kubenswrapper[31420]: I0220 12:16:12.391370 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" event={"ID":"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e","Type":"ContainerDied","Data":"f6671d57ebaad60fac5a6c20bda2c2a9ee9b1bb33d59b908866caf427a2b5ad7"}
Feb 20 12:16:12.391587 master-0 kubenswrapper[31420]: I0220 12:16:12.391409 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" event={"ID":"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e","Type":"ContainerStarted","Data":"47d6eb05c0c75279e1f0919f757567c56a21a2b41d6ca6f7cf0ab4ac632d5e99"}
Feb 20 12:16:13.403956 master-0 kubenswrapper[31420]: I0220 12:16:13.402968 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" event={"ID":"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e","Type":"ContainerStarted","Data":"b2723d7d2a0d2693f5a43c64cef9bf5f10fcbc0c950e39a3a6e97e3059164da9"}
Feb 20 12:16:14.417599 master-0 kubenswrapper[31420]: I0220 12:16:14.417368 31420 generic.go:334] "Generic (PLEG): container finished" podID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerID="b2723d7d2a0d2693f5a43c64cef9bf5f10fcbc0c950e39a3a6e97e3059164da9" exitCode=0
Feb 20 12:16:14.417599 master-0 kubenswrapper[31420]: I0220 12:16:14.417454 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" event={"ID":"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e","Type":"ContainerDied","Data":"b2723d7d2a0d2693f5a43c64cef9bf5f10fcbc0c950e39a3a6e97e3059164da9"}
Feb 20 12:16:15.435365 master-0 kubenswrapper[31420]: I0220 12:16:15.435251 31420 generic.go:334] "Generic (PLEG): container finished" podID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerID="d9cea6b474ba545c1ca7ee1d0ca80210ad802b9b82bac350bf2759bdf8339c41" exitCode=0
Feb 20 12:16:15.436517 master-0 kubenswrapper[31420]: I0220 12:16:15.435341 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" event={"ID":"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e","Type":"ContainerDied","Data":"d9cea6b474ba545c1ca7ee1d0ca80210ad802b9b82bac350bf2759bdf8339c41"}
Feb 20 12:16:16.872028 master-0 kubenswrapper[31420]: I0220 12:16:16.871955 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr"
Feb 20 12:16:16.950219 master-0 kubenswrapper[31420]: I0220 12:16:16.950088 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-util\") pod \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") "
Feb 20 12:16:16.951018 master-0 kubenswrapper[31420]: I0220 12:16:16.950992 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvm5g\" (UniqueName: \"kubernetes.io/projected/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-kube-api-access-gvm5g\") pod \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") "
Feb 20 12:16:16.951334 master-0 kubenswrapper[31420]: I0220 12:16:16.951274 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-bundle\") pod \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\" (UID: \"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e\") "
Feb 20 12:16:16.952550 master-0 kubenswrapper[31420]: I0220 12:16:16.952472 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-bundle" (OuterVolumeSpecName: "bundle") pod "8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" (UID: "8de82d75-91b3-4f8d-8bfd-e67bca18fa5e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:16:16.961328 master-0 kubenswrapper[31420]: I0220 12:16:16.961224 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-kube-api-access-gvm5g" (OuterVolumeSpecName: "kube-api-access-gvm5g") pod "8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" (UID: "8de82d75-91b3-4f8d-8bfd-e67bca18fa5e"). InnerVolumeSpecName "kube-api-access-gvm5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:16:17.055092 master-0 kubenswrapper[31420]: I0220 12:16:17.055033 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvm5g\" (UniqueName: \"kubernetes.io/projected/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-kube-api-access-gvm5g\") on node \"master-0\" DevicePath \"\""
Feb 20 12:16:17.055092 master-0 kubenswrapper[31420]: I0220 12:16:17.055089 31420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:16:17.080765 master-0 kubenswrapper[31420]: I0220 12:16:17.080576 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-util" (OuterVolumeSpecName: "util") pod "8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" (UID: "8de82d75-91b3-4f8d-8bfd-e67bca18fa5e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:16:17.158181 master-0 kubenswrapper[31420]: I0220 12:16:17.158102 31420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8de82d75-91b3-4f8d-8bfd-e67bca18fa5e-util\") on node \"master-0\" DevicePath \"\""
Feb 20 12:16:17.462608 master-0 kubenswrapper[31420]: I0220 12:16:17.462403 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr" event={"ID":"8de82d75-91b3-4f8d-8bfd-e67bca18fa5e","Type":"ContainerDied","Data":"47d6eb05c0c75279e1f0919f757567c56a21a2b41d6ca6f7cf0ab4ac632d5e99"}
Feb 20 12:16:17.462608 master-0 kubenswrapper[31420]: I0220 12:16:17.462445 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47d6eb05c0c75279e1f0919f757567c56a21a2b41d6ca6f7cf0ab4ac632d5e99"
Feb 20 12:16:17.462608 master-0 kubenswrapper[31420]: I0220 12:16:17.462470 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr"
Feb 20 12:16:22.830136 master-0 kubenswrapper[31420]: I0220 12:16:22.830059 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"]
Feb 20 12:16:22.830893 master-0 kubenswrapper[31420]: E0220 12:16:22.830474 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerName="pull"
Feb 20 12:16:22.830893 master-0 kubenswrapper[31420]: I0220 12:16:22.830489 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerName="pull"
Feb 20 12:16:22.830893 master-0 kubenswrapper[31420]: E0220 12:16:22.830505 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerName="util"
Feb 20 12:16:22.830893 master-0 kubenswrapper[31420]: I0220 12:16:22.830513 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerName="util"
Feb 20 12:16:22.830893 master-0 kubenswrapper[31420]: E0220 12:16:22.830579 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerName="extract"
Feb 20 12:16:22.830893 master-0 kubenswrapper[31420]: I0220 12:16:22.830589 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerName="extract"
Feb 20 12:16:22.830893 master-0 kubenswrapper[31420]: I0220 12:16:22.830801 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de82d75-91b3-4f8d-8bfd-e67bca18fa5e" containerName="extract"
Feb 20 12:16:22.831437 master-0 kubenswrapper[31420]: I0220 12:16:22.831414 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"
Feb 20 12:16:22.857605 master-0 kubenswrapper[31420]: I0220 12:16:22.857518 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"]
Feb 20 12:16:22.868930 master-0 kubenswrapper[31420]: I0220 12:16:22.868863 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvx8\" (UniqueName: \"kubernetes.io/projected/493ecdb3-0ff0-4c1f-8e5b-b713b7d9bc91-kube-api-access-msvx8\") pod \"openstack-operator-controller-init-6679bf9b57-zcfvv\" (UID: \"493ecdb3-0ff0-4c1f-8e5b-b713b7d9bc91\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"
Feb 20 12:16:22.970834 master-0 kubenswrapper[31420]: I0220 12:16:22.970749 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msvx8\" (UniqueName: \"kubernetes.io/projected/493ecdb3-0ff0-4c1f-8e5b-b713b7d9bc91-kube-api-access-msvx8\") pod \"openstack-operator-controller-init-6679bf9b57-zcfvv\" (UID: \"493ecdb3-0ff0-4c1f-8e5b-b713b7d9bc91\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"
Feb 20 12:16:22.989199 master-0 kubenswrapper[31420]: I0220 12:16:22.989138 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msvx8\" (UniqueName: \"kubernetes.io/projected/493ecdb3-0ff0-4c1f-8e5b-b713b7d9bc91-kube-api-access-msvx8\") pod \"openstack-operator-controller-init-6679bf9b57-zcfvv\" (UID: \"493ecdb3-0ff0-4c1f-8e5b-b713b7d9bc91\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"
Feb 20 12:16:23.150950 master-0 kubenswrapper[31420]: I0220 12:16:23.150807 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"
Feb 20 12:16:23.614363 master-0 kubenswrapper[31420]: I0220 12:16:23.614258 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"]
Feb 20 12:16:23.618046 master-0 kubenswrapper[31420]: W0220 12:16:23.617989 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod493ecdb3_0ff0_4c1f_8e5b_b713b7d9bc91.slice/crio-7e7e31fe3c01584e3ba793dbb9fcdb72fbd1d650c2e2b0220b24df03f0bcc2ad WatchSource:0}: Error finding container 7e7e31fe3c01584e3ba793dbb9fcdb72fbd1d650c2e2b0220b24df03f0bcc2ad: Status 404 returned error can't find the container with id 7e7e31fe3c01584e3ba793dbb9fcdb72fbd1d650c2e2b0220b24df03f0bcc2ad
Feb 20 12:16:24.556833 master-0 kubenswrapper[31420]: I0220 12:16:24.556755 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv" event={"ID":"493ecdb3-0ff0-4c1f-8e5b-b713b7d9bc91","Type":"ContainerStarted","Data":"7e7e31fe3c01584e3ba793dbb9fcdb72fbd1d650c2e2b0220b24df03f0bcc2ad"}
Feb 20 12:16:28.605763 master-0 kubenswrapper[31420]: I0220 12:16:28.605692 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv" event={"ID":"493ecdb3-0ff0-4c1f-8e5b-b713b7d9bc91","Type":"ContainerStarted","Data":"fb15e3cc239f37bbdb0bd4c739f319975ac597ef748358e01823d5e429d39f64"}
Feb 20 12:16:28.606746 master-0 kubenswrapper[31420]: I0220 12:16:28.605872 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"
Feb 20 12:16:28.643891 master-0 kubenswrapper[31420]: I0220 12:16:28.643230 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv" podStartSLOduration=2.445452783 podStartE2EDuration="6.643214854s" podCreationTimestamp="2026-02-20 12:16:22 +0000 UTC" firstStartedPulling="2026-02-20 12:16:23.620475794 +0000 UTC m=+688.339714045" lastFinishedPulling="2026-02-20 12:16:27.818237865 +0000 UTC m=+692.537476116" observedRunningTime="2026-02-20 12:16:28.640895079 +0000 UTC m=+693.360133370" watchObservedRunningTime="2026-02-20 12:16:28.643214854 +0000 UTC m=+693.362453095"
Feb 20 12:16:33.154264 master-0 kubenswrapper[31420]: I0220 12:16:33.154170 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-zcfvv"
Feb 20 12:16:53.924760 master-0 kubenswrapper[31420]: I0220 12:16:53.924482 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x"]
Feb 20 12:16:53.944821 master-0 kubenswrapper[31420]: I0220 12:16:53.941959 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x"
Feb 20 12:16:53.996547 master-0 kubenswrapper[31420]: I0220 12:16:53.988585 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf"]
Feb 20 12:16:53.996547 master-0 kubenswrapper[31420]: I0220 12:16:53.990190 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf"
Feb 20 12:16:54.009544 master-0 kubenswrapper[31420]: I0220 12:16:54.008518 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x"]
Feb 20 12:16:54.033598 master-0 kubenswrapper[31420]: I0220 12:16:54.025099 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf"]
Feb 20 12:16:54.044358 master-0 kubenswrapper[31420]: I0220 12:16:54.044274 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d626h\" (UniqueName: \"kubernetes.io/projected/5b412160-9ed7-4c10-9dc9-7fbe93d45803-kube-api-access-d626h\") pod \"barbican-operator-controller-manager-868647ff47-2db7x\" (UID: \"5b412160-9ed7-4c10-9dc9-7fbe93d45803\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x"
Feb 20 12:16:54.044637 master-0 kubenswrapper[31420]: I0220 12:16:54.044410 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhj29\" (UniqueName: \"kubernetes.io/projected/f0fda7fa-0935-47fc-8c9b-723d5b352c04-kube-api-access-xhj29\") pod \"cinder-operator-controller-manager-5d946d989d-8xjmf\" (UID: \"f0fda7fa-0935-47fc-8c9b-723d5b352c04\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf"
Feb 20 12:16:54.119408 master-0 kubenswrapper[31420]: I0220 12:16:54.119340 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4"]
Feb 20 12:16:54.122081 master-0 kubenswrapper[31420]: I0220 12:16:54.122045 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4"
Feb 20 12:16:54.138228 master-0 kubenswrapper[31420]: I0220 12:16:54.135571 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4"]
Feb 20 12:16:54.147423 master-0 kubenswrapper[31420]: I0220 12:16:54.147354 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhj29\" (UniqueName: \"kubernetes.io/projected/f0fda7fa-0935-47fc-8c9b-723d5b352c04-kube-api-access-xhj29\") pod \"cinder-operator-controller-manager-5d946d989d-8xjmf\" (UID: \"f0fda7fa-0935-47fc-8c9b-723d5b352c04\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf"
Feb 20 12:16:54.147628 master-0 kubenswrapper[31420]: I0220 12:16:54.147549 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d626h\" (UniqueName: \"kubernetes.io/projected/5b412160-9ed7-4c10-9dc9-7fbe93d45803-kube-api-access-d626h\") pod \"barbican-operator-controller-manager-868647ff47-2db7x\" (UID: \"5b412160-9ed7-4c10-9dc9-7fbe93d45803\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x"
Feb 20 12:16:54.195223 master-0 kubenswrapper[31420]: I0220 12:16:54.195101 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d626h\" (UniqueName: \"kubernetes.io/projected/5b412160-9ed7-4c10-9dc9-7fbe93d45803-kube-api-access-d626h\") pod \"barbican-operator-controller-manager-868647ff47-2db7x\" (UID: \"5b412160-9ed7-4c10-9dc9-7fbe93d45803\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x"
Feb 20 12:16:54.198267 master-0 kubenswrapper[31420]: I0220 12:16:54.197944 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhj29\" (UniqueName: \"kubernetes.io/projected/f0fda7fa-0935-47fc-8c9b-723d5b352c04-kube-api-access-xhj29\") pod \"cinder-operator-controller-manager-5d946d989d-8xjmf\" (UID: \"f0fda7fa-0935-47fc-8c9b-723d5b352c04\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf"
Feb 20 12:16:54.202201 master-0 kubenswrapper[31420]: I0220 12:16:54.202143 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r"]
Feb 20 12:16:54.203300 master-0 kubenswrapper[31420]: I0220 12:16:54.203261 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r"
Feb 20 12:16:54.213394 master-0 kubenswrapper[31420]: I0220 12:16:54.213346 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r"]
Feb 20 12:16:54.229610 master-0 kubenswrapper[31420]: I0220 12:16:54.229503 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx"]
Feb 20 12:16:54.230954 master-0 kubenswrapper[31420]: I0220 12:16:54.230913 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx"
Feb 20 12:16:54.266560 master-0 kubenswrapper[31420]: I0220 12:16:54.256484 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7j5b\" (UniqueName: \"kubernetes.io/projected/3e4015cc-c404-4a2d-8ac0-a550b2b168f3-kube-api-access-l7j5b\") pod \"designate-operator-controller-manager-6d8bf5c495-f7vz4\" (UID: \"3e4015cc-c404-4a2d-8ac0-a550b2b168f3\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4"
Feb 20 12:16:54.266560 master-0 kubenswrapper[31420]: I0220 12:16:54.260734 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng"]
Feb 20 12:16:54.275831 master-0 kubenswrapper[31420]: I0220 12:16:54.274235 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng"
Feb 20 12:16:54.301547 master-0 kubenswrapper[31420]: I0220 12:16:54.300617 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x"
Feb 20 12:16:54.328944 master-0 kubenswrapper[31420]: I0220 12:16:54.312932 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx"]
Feb 20 12:16:54.370639 master-0 kubenswrapper[31420]: I0220 12:16:54.367984 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7j5b\" (UniqueName: \"kubernetes.io/projected/3e4015cc-c404-4a2d-8ac0-a550b2b168f3-kube-api-access-l7j5b\") pod \"designate-operator-controller-manager-6d8bf5c495-f7vz4\" (UID: \"3e4015cc-c404-4a2d-8ac0-a550b2b168f3\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4"
Feb 20 12:16:54.370639 master-0 kubenswrapper[31420]: I0220 12:16:54.368099 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtcf\" (UniqueName: \"kubernetes.io/projected/61d84cd4-22bd-4958-8c16-ea0edee7180e-kube-api-access-rmtcf\") pod \"glance-operator-controller-manager-77987464f4-v6x7r\" (UID: \"61d84cd4-22bd-4958-8c16-ea0edee7180e\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r"
Feb 20 12:16:54.370639 master-0 kubenswrapper[31420]: I0220 12:16:54.368181 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jsl5\" (UniqueName: \"kubernetes.io/projected/7d5ba596-526c-42b9-845a-9a4ec0b084e9-kube-api-access-6jsl5\") pod \"heat-operator-controller-manager-69f49c598c-6slqx\" (UID: \"7d5ba596-526c-42b9-845a-9a4ec0b084e9\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx"
Feb 20 12:16:54.413595 master-0 kubenswrapper[31420]: I0220 12:16:54.404939 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf"
Feb 20 12:16:54.413595 master-0 kubenswrapper[31420]: I0220 12:16:54.406094 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7j5b\" (UniqueName: \"kubernetes.io/projected/3e4015cc-c404-4a2d-8ac0-a550b2b168f3-kube-api-access-l7j5b\") pod \"designate-operator-controller-manager-6d8bf5c495-f7vz4\" (UID: \"3e4015cc-c404-4a2d-8ac0-a550b2b168f3\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4"
Feb 20 12:16:54.480917 master-0 kubenswrapper[31420]: I0220 12:16:54.475585 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmtcf\" (UniqueName: \"kubernetes.io/projected/61d84cd4-22bd-4958-8c16-ea0edee7180e-kube-api-access-rmtcf\") pod \"glance-operator-controller-manager-77987464f4-v6x7r\" (UID: \"61d84cd4-22bd-4958-8c16-ea0edee7180e\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r"
Feb 20 12:16:54.480917 master-0 kubenswrapper[31420]: I0220 12:16:54.475694 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jsl5\" (UniqueName: \"kubernetes.io/projected/7d5ba596-526c-42b9-845a-9a4ec0b084e9-kube-api-access-6jsl5\") pod \"heat-operator-controller-manager-69f49c598c-6slqx\" (UID: \"7d5ba596-526c-42b9-845a-9a4ec0b084e9\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx"
Feb 20 12:16:54.480917 master-0 kubenswrapper[31420]: I0220 12:16:54.475788 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk4tr\" (UniqueName: \"kubernetes.io/projected/a86f22c3-c162-407b-9f7c-ee9fec02d78e-kube-api-access-lk4tr\") pod \"horizon-operator-controller-manager-5b9b8895d5-5qkng\" (UID: \"a86f22c3-c162-407b-9f7c-ee9fec02d78e\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng"
Feb 20 12:16:54.512415 master-0 kubenswrapper[31420]: I0220 12:16:54.510707 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"]
Feb 20 12:16:54.512415 master-0 kubenswrapper[31420]: I0220 12:16:54.511866 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:16:54.538548 master-0 kubenswrapper[31420]: I0220 12:16:54.525970 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 20 12:16:54.538548 master-0 kubenswrapper[31420]: I0220 12:16:54.527039 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jsl5\" (UniqueName: \"kubernetes.io/projected/7d5ba596-526c-42b9-845a-9a4ec0b084e9-kube-api-access-6jsl5\") pod \"heat-operator-controller-manager-69f49c598c-6slqx\" (UID: \"7d5ba596-526c-42b9-845a-9a4ec0b084e9\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx"
Feb 20 12:16:54.538548 master-0 kubenswrapper[31420]: I0220 12:16:54.530494 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmtcf\" (UniqueName: \"kubernetes.io/projected/61d84cd4-22bd-4958-8c16-ea0edee7180e-kube-api-access-rmtcf\") pod \"glance-operator-controller-manager-77987464f4-v6x7r\" (UID: \"61d84cd4-22bd-4958-8c16-ea0edee7180e\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r"
Feb 20 12:16:54.584953 master-0 kubenswrapper[31420]: I0220 12:16:54.581820 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk4tr\" (UniqueName: \"kubernetes.io/projected/a86f22c3-c162-407b-9f7c-ee9fec02d78e-kube-api-access-lk4tr\") pod \"horizon-operator-controller-manager-5b9b8895d5-5qkng\" (UID: \"a86f22c3-c162-407b-9f7c-ee9fec02d78e\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng"
Feb 20 12:16:54.596172 master-0 kubenswrapper[31420]: I0220 12:16:54.588658 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4"
Feb 20 12:16:54.614189 master-0 kubenswrapper[31420]: I0220 12:16:54.609741 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk"]
Feb 20 12:16:54.614189 master-0 kubenswrapper[31420]: I0220 12:16:54.610925 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk"
Feb 20 12:16:54.614189 master-0 kubenswrapper[31420]: I0220 12:16:54.612416 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r"
Feb 20 12:16:54.618447 master-0 kubenswrapper[31420]: I0220 12:16:54.618401 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk4tr\" (UniqueName: \"kubernetes.io/projected/a86f22c3-c162-407b-9f7c-ee9fec02d78e-kube-api-access-lk4tr\") pod \"horizon-operator-controller-manager-5b9b8895d5-5qkng\" (UID: \"a86f22c3-c162-407b-9f7c-ee9fec02d78e\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng"
Feb 20 12:16:54.664972 master-0 kubenswrapper[31420]: I0220 12:16:54.663195 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx"
Feb 20 12:16:54.672764 master-0 kubenswrapper[31420]: I0220 12:16:54.672511 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk"]
Feb 20 12:16:54.685250 master-0 kubenswrapper[31420]: I0220 12:16:54.683398 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:16:54.685250 master-0 kubenswrapper[31420]: I0220 12:16:54.683638 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6c57\" (UniqueName: \"kubernetes.io/projected/5d18777a-1196-401b-b94c-6c8504f5ce3b-kube-api-access-w6c57\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:16:54.697702 master-0 kubenswrapper[31420]: I0220 12:16:54.697269 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng"
Feb 20 12:16:54.719332 master-0 kubenswrapper[31420]: I0220 12:16:54.719271 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"]
Feb 20 12:16:54.785175 master-0 kubenswrapper[31420]: I0220 12:16:54.785066 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6c57\" (UniqueName: \"kubernetes.io/projected/5d18777a-1196-401b-b94c-6c8504f5ce3b-kube-api-access-w6c57\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:16:54.785444 master-0 kubenswrapper[31420]: I0220 12:16:54.785215 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crwxk\" (UniqueName: \"kubernetes.io/projected/c4b62567-b85d-476e-a92a-24b43173afd3-kube-api-access-crwxk\") pod \"ironic-operator-controller-manager-554564d7fc-bhhzk\" (UID: \"c4b62567-b85d-476e-a92a-24b43173afd3\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk"
Feb 20 12:16:54.785444 master-0 kubenswrapper[31420]: I0220 12:16:54.785247 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:16:54.786450 master-0 kubenswrapper[31420]: E0220 12:16:54.786392 31420 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 20 12:16:54.786509 master-0 kubenswrapper[31420]: E0220 12:16:54.786452 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert podName:5d18777a-1196-401b-b94c-6c8504f5ce3b nodeName:}" failed. No retries permitted until 2026-02-20 12:16:55.286432034 +0000 UTC m=+720.005670275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert") pod "infra-operator-controller-manager-5f879c76b6-bn5dg" (UID: "5d18777a-1196-401b-b94c-6c8504f5ce3b") : secret "infra-operator-webhook-server-cert" not found
Feb 20 12:16:54.811389 master-0 kubenswrapper[31420]: I0220 12:16:54.810267 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6c57\" (UniqueName: \"kubernetes.io/projected/5d18777a-1196-401b-b94c-6c8504f5ce3b-kube-api-access-w6c57\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:16:54.813387 master-0 kubenswrapper[31420]: I0220 12:16:54.812866 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng"]
Feb 20 12:16:54.826568 master-0 kubenswrapper[31420]: I0220 12:16:54.826463 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5"]
Feb 20 12:16:54.829434 master-0 kubenswrapper[31420]: I0220 12:16:54.828841 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5"
Feb 20 12:16:54.849050 master-0 kubenswrapper[31420]: I0220 12:16:54.848514 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5"]
Feb 20 12:16:54.872915 master-0 kubenswrapper[31420]: I0220 12:16:54.872848 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf"]
Feb 20 12:16:54.875936 master-0 kubenswrapper[31420]: I0220 12:16:54.875759 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf"
Feb 20 12:16:54.886307 master-0 kubenswrapper[31420]: I0220 12:16:54.886228 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf"]
Feb 20 12:16:54.887442 master-0 kubenswrapper[31420]: I0220 12:16:54.887230 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crwxk\" (UniqueName: \"kubernetes.io/projected/c4b62567-b85d-476e-a92a-24b43173afd3-kube-api-access-crwxk\") pod \"ironic-operator-controller-manager-554564d7fc-bhhzk\" (UID: \"c4b62567-b85d-476e-a92a-24b43173afd3\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk"
Feb 20 12:16:54.895120 master-0 kubenswrapper[31420]: I0220 12:16:54.895031 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8"]
Feb 20 12:16:54.897220 master-0 kubenswrapper[31420]: I0220 12:16:54.897185 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8"
Feb 20 12:16:54.904145 master-0 kubenswrapper[31420]: I0220 12:16:54.904095 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg"]
Feb 20 12:16:54.907855 master-0 kubenswrapper[31420]: I0220 12:16:54.906183 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg"
Feb 20 12:16:54.914074 master-0 kubenswrapper[31420]: I0220 12:16:54.914036 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8"]
Feb 20 12:16:54.917725 master-0 kubenswrapper[31420]: I0220 12:16:54.917671 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crwxk\" (UniqueName: \"kubernetes.io/projected/c4b62567-b85d-476e-a92a-24b43173afd3-kube-api-access-crwxk\") pod \"ironic-operator-controller-manager-554564d7fc-bhhzk\" (UID: \"c4b62567-b85d-476e-a92a-24b43173afd3\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk"
Feb 20 12:16:54.927736 master-0 kubenswrapper[31420]: I0220 12:16:54.927688 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg"]
Feb 20 12:16:54.951826 master-0 kubenswrapper[31420]: I0220 12:16:54.951769 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g"]
Feb 20 12:16:54.958871 master-0 kubenswrapper[31420]: I0220 12:16:54.958813 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g"
Feb 20 12:16:54.992774 master-0 kubenswrapper[31420]: I0220 12:16:54.992656 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf"]
Feb 20 12:16:54.993381 master-0 kubenswrapper[31420]: I0220 12:16:54.993333 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pcw9\" (UniqueName: \"kubernetes.io/projected/25065d47-a25e-4035-8c33-c73eb191f1b2-kube-api-access-2pcw9\") pod \"manila-operator-controller-manager-54f6768c69-57jpf\" (UID: \"25065d47-a25e-4035-8c33-c73eb191f1b2\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf"
Feb 20 12:16:54.994231 master-0 kubenswrapper[31420]: I0220 12:16:54.994204 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdnsm\" (UniqueName: \"kubernetes.io/projected/7e9508f3-a3ab-4df1-b9fb-775bba9a0f43-kube-api-access-vdnsm\") pod \"mariadb-operator-controller-manager-6994f66f48-rn9t8\" (UID: \"7e9508f3-a3ab-4df1-b9fb-775bba9a0f43\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8"
Feb 20 12:16:54.994299 master-0 kubenswrapper[31420]: I0220 12:16:54.994251 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86fgb\" (UniqueName: \"kubernetes.io/projected/ec3aef87-8ef5-4e4c-a06e-3d9424c62df6-kube-api-access-86fgb\") pod \"keystone-operator-controller-manager-b4d948c87-xdnq5\" (UID: \"ec3aef87-8ef5-4e4c-a06e-3d9424c62df6\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5"
Feb 20 12:16:55.009986 master-0 kubenswrapper[31420]: I0220 12:16:55.009899 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf"
Feb 20 12:16:55.056854 master-0 kubenswrapper[31420]: I0220 12:16:55.056753 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g"]
Feb 20 12:16:55.082956 master-0 kubenswrapper[31420]: I0220 12:16:55.065356 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf"]
Feb 20 12:16:55.082956 master-0 kubenswrapper[31420]: I0220 12:16:55.065736 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk"
Feb 20 12:16:55.082956 master-0 kubenswrapper[31420]: I0220 12:16:55.077717 31420 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 12:16:55.082956 master-0 kubenswrapper[31420]: I0220 12:16:55.078197 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg"]
Feb 20 12:16:55.082956 master-0 kubenswrapper[31420]: I0220 12:16:55.080559 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7"]
Feb 20 12:16:55.082956 master-0 kubenswrapper[31420]: I0220 12:16:55.081750 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg"
Feb 20 12:16:55.082956 master-0 kubenswrapper[31420]: I0220 12:16:55.082033 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7" Feb 20 12:16:55.087541 master-0 kubenswrapper[31420]: I0220 12:16:55.084166 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 20 12:16:55.096643 master-0 kubenswrapper[31420]: I0220 12:16:55.094320 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7"] Feb 20 12:16:55.096643 master-0 kubenswrapper[31420]: I0220 12:16:55.096288 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8fs\" (UniqueName: \"kubernetes.io/projected/4bec2508-5bbe-4c35-8292-94a77950167a-kube-api-access-jk8fs\") pod \"nova-operator-controller-manager-567668f5cf-mt88g\" (UID: \"4bec2508-5bbe-4c35-8292-94a77950167a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g" Feb 20 12:16:55.096643 master-0 kubenswrapper[31420]: I0220 12:16:55.096345 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdnsm\" (UniqueName: \"kubernetes.io/projected/7e9508f3-a3ab-4df1-b9fb-775bba9a0f43-kube-api-access-vdnsm\") pod \"mariadb-operator-controller-manager-6994f66f48-rn9t8\" (UID: \"7e9508f3-a3ab-4df1-b9fb-775bba9a0f43\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8" Feb 20 12:16:55.096643 master-0 kubenswrapper[31420]: I0220 12:16:55.096384 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5957\" (UniqueName: \"kubernetes.io/projected/b55110b9-7c65-46ee-a4f2-4e9b6a69158e-kube-api-access-b5957\") pod \"octavia-operator-controller-manager-69f8888797-rgsrf\" (UID: \"b55110b9-7c65-46ee-a4f2-4e9b6a69158e\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf" Feb 20 
12:16:55.096643 master-0 kubenswrapper[31420]: I0220 12:16:55.096424 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86fgb\" (UniqueName: \"kubernetes.io/projected/ec3aef87-8ef5-4e4c-a06e-3d9424c62df6-kube-api-access-86fgb\") pod \"keystone-operator-controller-manager-b4d948c87-xdnq5\" (UID: \"ec3aef87-8ef5-4e4c-a06e-3d9424c62df6\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5" Feb 20 12:16:55.096643 master-0 kubenswrapper[31420]: I0220 12:16:55.096509 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pcw9\" (UniqueName: \"kubernetes.io/projected/25065d47-a25e-4035-8c33-c73eb191f1b2-kube-api-access-2pcw9\") pod \"manila-operator-controller-manager-54f6768c69-57jpf\" (UID: \"25065d47-a25e-4035-8c33-c73eb191f1b2\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf" Feb 20 12:16:55.096643 master-0 kubenswrapper[31420]: I0220 12:16:55.096627 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkpml\" (UniqueName: \"kubernetes.io/projected/38f6b140-e4b4-4999-af19-6dc2973ca6ed-kube-api-access-nkpml\") pod \"neutron-operator-controller-manager-64ddbf8bb-rpcfg\" (UID: \"38f6b140-e4b4-4999-af19-6dc2973ca6ed\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg" Feb 20 12:16:55.106668 master-0 kubenswrapper[31420]: I0220 12:16:55.106215 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm"] Feb 20 12:16:55.115542 master-0 kubenswrapper[31420]: I0220 12:16:55.107969 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm" Feb 20 12:16:55.125542 master-0 kubenswrapper[31420]: I0220 12:16:55.120710 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdnsm\" (UniqueName: \"kubernetes.io/projected/7e9508f3-a3ab-4df1-b9fb-775bba9a0f43-kube-api-access-vdnsm\") pod \"mariadb-operator-controller-manager-6994f66f48-rn9t8\" (UID: \"7e9508f3-a3ab-4df1-b9fb-775bba9a0f43\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8" Feb 20 12:16:55.125542 master-0 kubenswrapper[31420]: I0220 12:16:55.120966 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg"] Feb 20 12:16:55.125542 master-0 kubenswrapper[31420]: I0220 12:16:55.123296 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pcw9\" (UniqueName: \"kubernetes.io/projected/25065d47-a25e-4035-8c33-c73eb191f1b2-kube-api-access-2pcw9\") pod \"manila-operator-controller-manager-54f6768c69-57jpf\" (UID: \"25065d47-a25e-4035-8c33-c73eb191f1b2\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf" Feb 20 12:16:55.125542 master-0 kubenswrapper[31420]: I0220 12:16:55.124852 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86fgb\" (UniqueName: \"kubernetes.io/projected/ec3aef87-8ef5-4e4c-a06e-3d9424c62df6-kube-api-access-86fgb\") pod \"keystone-operator-controller-manager-b4d948c87-xdnq5\" (UID: \"ec3aef87-8ef5-4e4c-a06e-3d9424c62df6\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5" Feb 20 12:16:55.139623 master-0 kubenswrapper[31420]: I0220 12:16:55.128137 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm"] Feb 20 12:16:55.139623 master-0 kubenswrapper[31420]: 
I0220 12:16:55.138126 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc"] Feb 20 12:16:55.139843 master-0 kubenswrapper[31420]: I0220 12:16:55.139696 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc" Feb 20 12:16:55.146215 master-0 kubenswrapper[31420]: I0220 12:16:55.146181 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc"] Feb 20 12:16:55.154901 master-0 kubenswrapper[31420]: I0220 12:16:55.154839 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf"] Feb 20 12:16:55.157013 master-0 kubenswrapper[31420]: I0220 12:16:55.156984 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf" Feb 20 12:16:55.161233 master-0 kubenswrapper[31420]: I0220 12:16:55.161178 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5" Feb 20 12:16:55.179284 master-0 kubenswrapper[31420]: I0220 12:16:55.176749 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-8sqt4"] Feb 20 12:16:55.180897 master-0 kubenswrapper[31420]: I0220 12:16:55.180814 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4" Feb 20 12:16:55.192378 master-0 kubenswrapper[31420]: I0220 12:16:55.192324 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf"] Feb 20 12:16:55.204007 master-0 kubenswrapper[31420]: I0220 12:16:55.203634 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-8sqt4"] Feb 20 12:16:55.204212 master-0 kubenswrapper[31420]: I0220 12:16:55.204034 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfc28\" (UniqueName: \"kubernetes.io/projected/0503483c-565e-4e79-ba1b-bf0ad98481b0-kube-api-access-vfc28\") pod \"ovn-operator-controller-manager-d44cf6b75-bl8h7\" (UID: \"0503483c-565e-4e79-ba1b-bf0ad98481b0\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7" Feb 20 12:16:55.204212 master-0 kubenswrapper[31420]: I0220 12:16:55.204117 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk8fs\" (UniqueName: \"kubernetes.io/projected/4bec2508-5bbe-4c35-8292-94a77950167a-kube-api-access-jk8fs\") pod \"nova-operator-controller-manager-567668f5cf-mt88g\" (UID: \"4bec2508-5bbe-4c35-8292-94a77950167a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g" Feb 20 12:16:55.209633 master-0 kubenswrapper[31420]: I0220 12:16:55.209585 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 12:16:55.209823 master-0 
kubenswrapper[31420]: I0220 12:16:55.209801 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c79cg\" (UniqueName: \"kubernetes.io/projected/4d890613-25fd-4d7f-b82b-1295bb5a66fd-kube-api-access-c79cg\") pod \"placement-operator-controller-manager-8497b45c89-d2ptm\" (UID: \"4d890613-25fd-4d7f-b82b-1295bb5a66fd\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm" Feb 20 12:16:55.209989 master-0 kubenswrapper[31420]: I0220 12:16:55.209971 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztq4\" (UniqueName: \"kubernetes.io/projected/1db07cb7-a520-4044-95b9-05f1ec724217-kube-api-access-tztq4\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 12:16:55.210082 master-0 kubenswrapper[31420]: I0220 12:16:55.210066 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5957\" (UniqueName: \"kubernetes.io/projected/b55110b9-7c65-46ee-a4f2-4e9b6a69158e-kube-api-access-b5957\") pod \"octavia-operator-controller-manager-69f8888797-rgsrf\" (UID: \"b55110b9-7c65-46ee-a4f2-4e9b6a69158e\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf" Feb 20 12:16:55.210440 master-0 kubenswrapper[31420]: I0220 12:16:55.210421 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkpml\" (UniqueName: \"kubernetes.io/projected/38f6b140-e4b4-4999-af19-6dc2973ca6ed-kube-api-access-nkpml\") pod \"neutron-operator-controller-manager-64ddbf8bb-rpcfg\" (UID: \"38f6b140-e4b4-4999-af19-6dc2973ca6ed\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg" Feb 20 12:16:55.211748 master-0 kubenswrapper[31420]: I0220 
12:16:55.211704 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf" Feb 20 12:16:55.214969 master-0 kubenswrapper[31420]: I0220 12:16:55.214477 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd"] Feb 20 12:16:55.216228 master-0 kubenswrapper[31420]: I0220 12:16:55.216187 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd" Feb 20 12:16:55.232058 master-0 kubenswrapper[31420]: I0220 12:16:55.231907 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8" Feb 20 12:16:55.234873 master-0 kubenswrapper[31420]: I0220 12:16:55.234011 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5957\" (UniqueName: \"kubernetes.io/projected/b55110b9-7c65-46ee-a4f2-4e9b6a69158e-kube-api-access-b5957\") pod \"octavia-operator-controller-manager-69f8888797-rgsrf\" (UID: \"b55110b9-7c65-46ee-a4f2-4e9b6a69158e\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf" Feb 20 12:16:55.237557 master-0 kubenswrapper[31420]: I0220 12:16:55.237104 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk8fs\" (UniqueName: \"kubernetes.io/projected/4bec2508-5bbe-4c35-8292-94a77950167a-kube-api-access-jk8fs\") pod \"nova-operator-controller-manager-567668f5cf-mt88g\" (UID: \"4bec2508-5bbe-4c35-8292-94a77950167a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g" Feb 20 12:16:55.237557 master-0 kubenswrapper[31420]: I0220 12:16:55.237191 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd"] Feb 20 
12:16:55.239231 master-0 kubenswrapper[31420]: I0220 12:16:55.239200 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkpml\" (UniqueName: \"kubernetes.io/projected/38f6b140-e4b4-4999-af19-6dc2973ca6ed-kube-api-access-nkpml\") pod \"neutron-operator-controller-manager-64ddbf8bb-rpcfg\" (UID: \"38f6b140-e4b4-4999-af19-6dc2973ca6ed\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg" Feb 20 12:16:55.247994 master-0 kubenswrapper[31420]: W0220 12:16:55.247884 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0fda7fa_0935_47fc_8c9b_723d5b352c04.slice/crio-3bc342e4f26d81b413b7216cf0b54c33f4df9c39f8a4bc30b7597360fe42e2ac WatchSource:0}: Error finding container 3bc342e4f26d81b413b7216cf0b54c33f4df9c39f8a4bc30b7597360fe42e2ac: Status 404 returned error can't find the container with id 3bc342e4f26d81b413b7216cf0b54c33f4df9c39f8a4bc30b7597360fe42e2ac Feb 20 12:16:55.262642 master-0 kubenswrapper[31420]: I0220 12:16:55.253603 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"] Feb 20 12:16:55.262642 master-0 kubenswrapper[31420]: I0220 12:16:55.255277 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:55.263285 master-0 kubenswrapper[31420]: I0220 12:16:55.263236 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"] Feb 20 12:16:55.263716 master-0 kubenswrapper[31420]: I0220 12:16:55.263696 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 20 12:16:55.263876 master-0 kubenswrapper[31420]: I0220 12:16:55.263833 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 20 12:16:55.273599 master-0 kubenswrapper[31420]: I0220 12:16:55.273545 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7"] Feb 20 12:16:55.276377 master-0 kubenswrapper[31420]: I0220 12:16:55.276113 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7" Feb 20 12:16:55.288825 master-0 kubenswrapper[31420]: I0220 12:16:55.288753 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7"] Feb 20 12:16:55.300324 master-0 kubenswrapper[31420]: I0220 12:16:55.299189 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g" Feb 20 12:16:55.315571 master-0 kubenswrapper[31420]: I0220 12:16:55.313814 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjnh\" (UniqueName: \"kubernetes.io/projected/862d9673-54d9-4647-bcc5-146f5ab37483-kube-api-access-pmjnh\") pod \"telemetry-operator-controller-manager-7f45b4ff68-qpwhf\" (UID: \"862d9673-54d9-4647-bcc5-146f5ab37483\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf" Feb 20 12:16:55.317153 master-0 kubenswrapper[31420]: I0220 12:16:55.316049 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kppw5\" (UniqueName: \"kubernetes.io/projected/4bb94d70-fe73-4d78-8f5a-6ebb2f0ea9d5-kube-api-access-kppw5\") pod \"swift-operator-controller-manager-68f46476f-fvzrc\" (UID: \"4bb94d70-fe73-4d78-8f5a-6ebb2f0ea9d5\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc" Feb 20 12:16:55.317153 master-0 kubenswrapper[31420]: I0220 12:16:55.316123 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg" Feb 20 12:16:55.317153 master-0 kubenswrapper[31420]: I0220 12:16:55.316171 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pssnc\" (UniqueName: \"kubernetes.io/projected/8e537287-9fc2-4c6f-bea9-0b7c8565a6c7-kube-api-access-pssnc\") pod \"watcher-operator-controller-manager-5db88f68c-h5qzd\" (UID: \"8e537287-9fc2-4c6f-bea9-0b7c8565a6c7\") " 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd" Feb 20 12:16:55.317153 master-0 kubenswrapper[31420]: I0220 12:16:55.316252 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfc28\" (UniqueName: \"kubernetes.io/projected/0503483c-565e-4e79-ba1b-bf0ad98481b0-kube-api-access-vfc28\") pod \"ovn-operator-controller-manager-d44cf6b75-bl8h7\" (UID: \"0503483c-565e-4e79-ba1b-bf0ad98481b0\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7" Feb 20 12:16:55.317153 master-0 kubenswrapper[31420]: I0220 12:16:55.316299 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 12:16:55.317153 master-0 kubenswrapper[31420]: I0220 12:16:55.316322 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c79cg\" (UniqueName: \"kubernetes.io/projected/4d890613-25fd-4d7f-b82b-1295bb5a66fd-kube-api-access-c79cg\") pod \"placement-operator-controller-manager-8497b45c89-d2ptm\" (UID: \"4d890613-25fd-4d7f-b82b-1295bb5a66fd\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm" Feb 20 12:16:55.317153 master-0 kubenswrapper[31420]: I0220 12:16:55.316343 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tztq4\" (UniqueName: \"kubernetes.io/projected/1db07cb7-a520-4044-95b9-05f1ec724217-kube-api-access-tztq4\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 
12:16:55.317153 master-0 kubenswrapper[31420]: I0220 12:16:55.316384 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9zm7\" (UniqueName: \"kubernetes.io/projected/c76a47e8-3df4-4cdf-9eb2-dfc28847f5ad-kube-api-access-b9zm7\") pod \"test-operator-controller-manager-7866795846-8sqt4\" (UID: \"c76a47e8-3df4-4cdf-9eb2-dfc28847f5ad\") " pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4" Feb 20 12:16:55.317425 master-0 kubenswrapper[31420]: E0220 12:16:55.317149 31420 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 12:16:55.317425 master-0 kubenswrapper[31420]: E0220 12:16:55.317330 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert podName:5d18777a-1196-401b-b94c-6c8504f5ce3b nodeName:}" failed. No retries permitted until 2026-02-20 12:16:56.317278367 +0000 UTC m=+721.036516658 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert") pod "infra-operator-controller-manager-5f879c76b6-bn5dg" (UID: "5d18777a-1196-401b-b94c-6c8504f5ce3b") : secret "infra-operator-webhook-server-cert" not found Feb 20 12:16:55.318035 master-0 kubenswrapper[31420]: E0220 12:16:55.317996 31420 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:16:55.318084 master-0 kubenswrapper[31420]: E0220 12:16:55.318055 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert podName:1db07cb7-a520-4044-95b9-05f1ec724217 nodeName:}" failed. 
No retries permitted until 2026-02-20 12:16:55.818042058 +0000 UTC m=+720.537280299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" (UID: "1db07cb7-a520-4044-95b9-05f1ec724217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:16:55.348901 master-0 kubenswrapper[31420]: I0220 12:16:55.348843 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c79cg\" (UniqueName: \"kubernetes.io/projected/4d890613-25fd-4d7f-b82b-1295bb5a66fd-kube-api-access-c79cg\") pod \"placement-operator-controller-manager-8497b45c89-d2ptm\" (UID: \"4d890613-25fd-4d7f-b82b-1295bb5a66fd\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm" Feb 20 12:16:55.350925 master-0 kubenswrapper[31420]: I0220 12:16:55.349810 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfc28\" (UniqueName: \"kubernetes.io/projected/0503483c-565e-4e79-ba1b-bf0ad98481b0-kube-api-access-vfc28\") pod \"ovn-operator-controller-manager-d44cf6b75-bl8h7\" (UID: \"0503483c-565e-4e79-ba1b-bf0ad98481b0\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7" Feb 20 12:16:55.354718 master-0 kubenswrapper[31420]: W0220 12:16:55.354319 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e4015cc_c404_4a2d_8ac0_a550b2b168f3.slice/crio-421dd6dccd2f215b85568a5d52ee9764b4bf854be8f56c105b12c73fa4aeb9ce WatchSource:0}: Error finding container 421dd6dccd2f215b85568a5d52ee9764b4bf854be8f56c105b12c73fa4aeb9ce: Status 404 returned error can't find the container with id 421dd6dccd2f215b85568a5d52ee9764b4bf854be8f56c105b12c73fa4aeb9ce Feb 20 12:16:55.354890 master-0 kubenswrapper[31420]: I0220 12:16:55.354776 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tztq4\" (UniqueName: \"kubernetes.io/projected/1db07cb7-a520-4044-95b9-05f1ec724217-kube-api-access-tztq4\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 12:16:55.359859 master-0 kubenswrapper[31420]: I0220 12:16:55.359773 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x"] Feb 20 12:16:55.370290 master-0 kubenswrapper[31420]: I0220 12:16:55.367030 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf" Feb 20 12:16:55.412929 master-0 kubenswrapper[31420]: I0220 12:16:55.412688 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7" Feb 20 12:16:55.420953 master-0 kubenswrapper[31420]: I0220 12:16:55.418613 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmjnh\" (UniqueName: \"kubernetes.io/projected/862d9673-54d9-4647-bcc5-146f5ab37483-kube-api-access-pmjnh\") pod \"telemetry-operator-controller-manager-7f45b4ff68-qpwhf\" (UID: \"862d9673-54d9-4647-bcc5-146f5ab37483\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf" Feb 20 12:16:55.420953 master-0 kubenswrapper[31420]: I0220 12:16:55.418682 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " 
pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:55.420953 master-0 kubenswrapper[31420]: I0220 12:16:55.418702 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fp6x\" (UniqueName: \"kubernetes.io/projected/b0bc4973-cb77-49a8-be3a-35340c08e9a0-kube-api-access-5fp6x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kwlp7\" (UID: \"b0bc4973-cb77-49a8-be3a-35340c08e9a0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7" Feb 20 12:16:55.420953 master-0 kubenswrapper[31420]: I0220 12:16:55.418744 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:55.420953 master-0 kubenswrapper[31420]: I0220 12:16:55.418792 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kppw5\" (UniqueName: \"kubernetes.io/projected/4bb94d70-fe73-4d78-8f5a-6ebb2f0ea9d5-kube-api-access-kppw5\") pod \"swift-operator-controller-manager-68f46476f-fvzrc\" (UID: \"4bb94d70-fe73-4d78-8f5a-6ebb2f0ea9d5\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc" Feb 20 12:16:55.420953 master-0 kubenswrapper[31420]: I0220 12:16:55.418855 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pssnc\" (UniqueName: \"kubernetes.io/projected/8e537287-9fc2-4c6f-bea9-0b7c8565a6c7-kube-api-access-pssnc\") pod \"watcher-operator-controller-manager-5db88f68c-h5qzd\" (UID: \"8e537287-9fc2-4c6f-bea9-0b7c8565a6c7\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd" Feb 20 
12:16:55.420953 master-0 kubenswrapper[31420]: I0220 12:16:55.418882 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvdrl\" (UniqueName: \"kubernetes.io/projected/43ce6170-0fbe-4278-a2cc-131dad533824-kube-api-access-kvdrl\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:55.420953 master-0 kubenswrapper[31420]: I0220 12:16:55.418964 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9zm7\" (UniqueName: \"kubernetes.io/projected/c76a47e8-3df4-4cdf-9eb2-dfc28847f5ad-kube-api-access-b9zm7\") pod \"test-operator-controller-manager-7866795846-8sqt4\" (UID: \"c76a47e8-3df4-4cdf-9eb2-dfc28847f5ad\") " pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4" Feb 20 12:16:55.425691 master-0 kubenswrapper[31420]: I0220 12:16:55.425638 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf"] Feb 20 12:16:55.435879 master-0 kubenswrapper[31420]: I0220 12:16:55.434866 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9zm7\" (UniqueName: \"kubernetes.io/projected/c76a47e8-3df4-4cdf-9eb2-dfc28847f5ad-kube-api-access-b9zm7\") pod \"test-operator-controller-manager-7866795846-8sqt4\" (UID: \"c76a47e8-3df4-4cdf-9eb2-dfc28847f5ad\") " pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4" Feb 20 12:16:55.445733 master-0 kubenswrapper[31420]: I0220 12:16:55.445658 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmjnh\" (UniqueName: \"kubernetes.io/projected/862d9673-54d9-4647-bcc5-146f5ab37483-kube-api-access-pmjnh\") pod \"telemetry-operator-controller-manager-7f45b4ff68-qpwhf\" (UID: 
\"862d9673-54d9-4647-bcc5-146f5ab37483\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf" Feb 20 12:16:55.457162 master-0 kubenswrapper[31420]: I0220 12:16:55.457108 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kppw5\" (UniqueName: \"kubernetes.io/projected/4bb94d70-fe73-4d78-8f5a-6ebb2f0ea9d5-kube-api-access-kppw5\") pod \"swift-operator-controller-manager-68f46476f-fvzrc\" (UID: \"4bb94d70-fe73-4d78-8f5a-6ebb2f0ea9d5\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc" Feb 20 12:16:55.458669 master-0 kubenswrapper[31420]: I0220 12:16:55.458619 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pssnc\" (UniqueName: \"kubernetes.io/projected/8e537287-9fc2-4c6f-bea9-0b7c8565a6c7-kube-api-access-pssnc\") pod \"watcher-operator-controller-manager-5db88f68c-h5qzd\" (UID: \"8e537287-9fc2-4c6f-bea9-0b7c8565a6c7\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd" Feb 20 12:16:55.464923 master-0 kubenswrapper[31420]: I0220 12:16:55.464878 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc" Feb 20 12:16:55.489062 master-0 kubenswrapper[31420]: I0220 12:16:55.482952 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm" Feb 20 12:16:55.489062 master-0 kubenswrapper[31420]: I0220 12:16:55.484254 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf" Feb 20 12:16:55.534673 master-0 kubenswrapper[31420]: I0220 12:16:55.524393 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvdrl\" (UniqueName: \"kubernetes.io/projected/43ce6170-0fbe-4278-a2cc-131dad533824-kube-api-access-kvdrl\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:55.534673 master-0 kubenswrapper[31420]: I0220 12:16:55.528636 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:55.534673 master-0 kubenswrapper[31420]: I0220 12:16:55.528707 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fp6x\" (UniqueName: \"kubernetes.io/projected/b0bc4973-cb77-49a8-be3a-35340c08e9a0-kube-api-access-5fp6x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kwlp7\" (UID: \"b0bc4973-cb77-49a8-be3a-35340c08e9a0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7" Feb 20 12:16:55.534673 master-0 kubenswrapper[31420]: I0220 12:16:55.528804 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:55.534673 
master-0 kubenswrapper[31420]: I0220 12:16:55.531709 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 20 12:16:55.534673 master-0 kubenswrapper[31420]: I0220 12:16:55.532617 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 20 12:16:55.539132 master-0 kubenswrapper[31420]: I0220 12:16:55.539070 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg" Feb 20 12:16:55.539738 master-0 kubenswrapper[31420]: E0220 12:16:55.539713 31420 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 12:16:55.539867 master-0 kubenswrapper[31420]: E0220 12:16:55.539855 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:16:56.039836877 +0000 UTC m=+720.759075118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "metrics-server-cert" not found Feb 20 12:16:55.540282 master-0 kubenswrapper[31420]: E0220 12:16:55.540250 31420 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 12:16:55.540340 master-0 kubenswrapper[31420]: E0220 12:16:55.540325 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. 
No retries permitted until 2026-02-20 12:16:56.04029742 +0000 UTC m=+720.759535751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "webhook-server-cert" not found Feb 20 12:16:55.545408 master-0 kubenswrapper[31420]: I0220 12:16:55.545368 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvdrl\" (UniqueName: \"kubernetes.io/projected/43ce6170-0fbe-4278-a2cc-131dad533824-kube-api-access-kvdrl\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:55.549187 master-0 kubenswrapper[31420]: I0220 12:16:55.549145 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fp6x\" (UniqueName: \"kubernetes.io/projected/b0bc4973-cb77-49a8-be3a-35340c08e9a0-kube-api-access-5fp6x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kwlp7\" (UID: \"b0bc4973-cb77-49a8-be3a-35340c08e9a0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7" Feb 20 12:16:55.581310 master-0 kubenswrapper[31420]: I0220 12:16:55.581277 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4" Feb 20 12:16:55.593677 master-0 kubenswrapper[31420]: I0220 12:16:55.593632 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4"] Feb 20 12:16:55.594578 master-0 kubenswrapper[31420]: I0220 12:16:55.594233 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd" Feb 20 12:16:55.622718 master-0 kubenswrapper[31420]: I0220 12:16:55.622678 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7" Feb 20 12:16:55.709059 master-0 kubenswrapper[31420]: I0220 12:16:55.708983 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx"] Feb 20 12:16:55.837466 master-0 kubenswrapper[31420]: I0220 12:16:55.837387 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r"] Feb 20 12:16:55.846578 master-0 kubenswrapper[31420]: I0220 12:16:55.846514 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 12:16:55.846808 master-0 kubenswrapper[31420]: E0220 12:16:55.846693 31420 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:16:55.846808 master-0 kubenswrapper[31420]: E0220 12:16:55.846792 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert podName:1db07cb7-a520-4044-95b9-05f1ec724217 nodeName:}" failed. No retries permitted until 2026-02-20 12:16:56.846770631 +0000 UTC m=+721.566008872 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" (UID: "1db07cb7-a520-4044-95b9-05f1ec724217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:16:55.854023 master-0 kubenswrapper[31420]: I0220 12:16:55.851502 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng"] Feb 20 12:16:55.997624 master-0 kubenswrapper[31420]: I0220 12:16:55.997556 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf" event={"ID":"f0fda7fa-0935-47fc-8c9b-723d5b352c04","Type":"ContainerStarted","Data":"3bc342e4f26d81b413b7216cf0b54c33f4df9c39f8a4bc30b7597360fe42e2ac"} Feb 20 12:16:55.999106 master-0 kubenswrapper[31420]: I0220 12:16:55.999048 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r" event={"ID":"61d84cd4-22bd-4958-8c16-ea0edee7180e","Type":"ContainerStarted","Data":"686aed434b29f116963385d8d1851fb33c8a53847c434322fe9d73e48fae64e4"} Feb 20 12:16:56.000482 master-0 kubenswrapper[31420]: I0220 12:16:56.000426 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx" event={"ID":"7d5ba596-526c-42b9-845a-9a4ec0b084e9","Type":"ContainerStarted","Data":"ccced5e3237f6badd059bd8ec9589a808b32a3db8cb48a90dd5e348be2d969da"} Feb 20 12:16:56.003265 master-0 kubenswrapper[31420]: I0220 12:16:56.003182 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4" event={"ID":"3e4015cc-c404-4a2d-8ac0-a550b2b168f3","Type":"ContainerStarted","Data":"421dd6dccd2f215b85568a5d52ee9764b4bf854be8f56c105b12c73fa4aeb9ce"} Feb 20 12:16:56.005316 
master-0 kubenswrapper[31420]: I0220 12:16:56.005280 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x" event={"ID":"5b412160-9ed7-4c10-9dc9-7fbe93d45803","Type":"ContainerStarted","Data":"fbc37e576fe4afed27d8e9f36d326d4623210da85c120c284009da04e2db2371"} Feb 20 12:16:56.006616 master-0 kubenswrapper[31420]: I0220 12:16:56.006554 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng" event={"ID":"a86f22c3-c162-407b-9f7c-ee9fec02d78e","Type":"ContainerStarted","Data":"eb321ba8e7502ef92bba6628a4b40fb7455a80be65f4ae6fbd9b2b73bf0b2f0e"} Feb 20 12:16:56.056461 master-0 kubenswrapper[31420]: I0220 12:16:56.055726 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:56.056670 master-0 kubenswrapper[31420]: E0220 12:16:56.055873 31420 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 12:16:56.056670 master-0 kubenswrapper[31420]: E0220 12:16:56.056564 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:16:57.056546384 +0000 UTC m=+721.775784625 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "metrics-server-cert" not found Feb 20 12:16:56.056670 master-0 kubenswrapper[31420]: I0220 12:16:56.056648 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:56.057014 master-0 kubenswrapper[31420]: E0220 12:16:56.056914 31420 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 12:16:56.057014 master-0 kubenswrapper[31420]: E0220 12:16:56.056996 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:16:57.056949676 +0000 UTC m=+721.776187917 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "webhook-server-cert" not found Feb 20 12:16:56.291700 master-0 kubenswrapper[31420]: I0220 12:16:56.291627 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk"] Feb 20 12:16:56.299804 master-0 kubenswrapper[31420]: I0220 12:16:56.299635 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5"] Feb 20 12:16:56.318733 master-0 kubenswrapper[31420]: W0220 12:16:56.318690 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec3aef87_8ef5_4e4c_a06e_3d9424c62df6.slice/crio-21d654ad525512b4ef0d9f1389500ed482751536768f3b2692d905a0d4f552e7 WatchSource:0}: Error finding container 21d654ad525512b4ef0d9f1389500ed482751536768f3b2692d905a0d4f552e7: Status 404 returned error can't find the container with id 21d654ad525512b4ef0d9f1389500ed482751536768f3b2692d905a0d4f552e7 Feb 20 12:16:56.327279 master-0 kubenswrapper[31420]: I0220 12:16:56.327229 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8"] Feb 20 12:16:56.334127 master-0 kubenswrapper[31420]: W0220 12:16:56.334065 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e9508f3_a3ab_4df1_b9fb_775bba9a0f43.slice/crio-6056ba18d845466568a8e70796bdf145d715349748b6c20108877846abb7b261 WatchSource:0}: Error finding container 6056ba18d845466568a8e70796bdf145d715349748b6c20108877846abb7b261: Status 404 returned error can't find the container with id 
6056ba18d845466568a8e70796bdf145d715349748b6c20108877846abb7b261 Feb 20 12:16:56.363721 master-0 kubenswrapper[31420]: I0220 12:16:56.363645 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg" Feb 20 12:16:56.368794 master-0 kubenswrapper[31420]: E0220 12:16:56.363966 31420 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 12:16:56.368794 master-0 kubenswrapper[31420]: E0220 12:16:56.364089 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert podName:5d18777a-1196-401b-b94c-6c8504f5ce3b nodeName:}" failed. No retries permitted until 2026-02-20 12:16:58.364062265 +0000 UTC m=+723.083300556 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert") pod "infra-operator-controller-manager-5f879c76b6-bn5dg" (UID: "5d18777a-1196-401b-b94c-6c8504f5ce3b") : secret "infra-operator-webhook-server-cert" not found Feb 20 12:16:56.870619 master-0 kubenswrapper[31420]: I0220 12:16:56.868568 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf"] Feb 20 12:16:56.873622 master-0 kubenswrapper[31420]: I0220 12:16:56.873577 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 12:16:56.873775 master-0 kubenswrapper[31420]: E0220 12:16:56.873737 31420 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:16:56.873843 master-0 kubenswrapper[31420]: E0220 12:16:56.873830 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert podName:1db07cb7-a520-4044-95b9-05f1ec724217 nodeName:}" failed. No retries permitted until 2026-02-20 12:16:58.873810027 +0000 UTC m=+723.593048268 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" (UID: "1db07cb7-a520-4044-95b9-05f1ec724217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:16:56.875920 master-0 kubenswrapper[31420]: I0220 12:16:56.875534 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf"] Feb 20 12:16:56.881133 master-0 kubenswrapper[31420]: I0220 12:16:56.881082 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg"] Feb 20 12:16:56.902773 master-0 kubenswrapper[31420]: I0220 12:16:56.902695 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g"] Feb 20 12:16:56.912873 master-0 kubenswrapper[31420]: I0220 12:16:56.912800 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf"] Feb 20 12:16:56.920385 master-0 kubenswrapper[31420]: I0220 12:16:56.920329 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7"] Feb 20 12:16:56.928924 master-0 kubenswrapper[31420]: I0220 12:16:56.928735 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc"] Feb 20 12:16:56.935249 master-0 kubenswrapper[31420]: I0220 12:16:56.935202 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-8sqt4"] Feb 20 12:16:57.054552 master-0 kubenswrapper[31420]: I0220 12:16:57.054462 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5" 
event={"ID":"ec3aef87-8ef5-4e4c-a06e-3d9424c62df6","Type":"ContainerStarted","Data":"21d654ad525512b4ef0d9f1389500ed482751536768f3b2692d905a0d4f552e7"} Feb 20 12:16:57.058195 master-0 kubenswrapper[31420]: I0220 12:16:57.058123 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8" event={"ID":"7e9508f3-a3ab-4df1-b9fb-775bba9a0f43","Type":"ContainerStarted","Data":"6056ba18d845466568a8e70796bdf145d715349748b6c20108877846abb7b261"} Feb 20 12:16:57.062306 master-0 kubenswrapper[31420]: I0220 12:16:57.062254 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk" event={"ID":"c4b62567-b85d-476e-a92a-24b43173afd3","Type":"ContainerStarted","Data":"60a74650a0e5a49a106b01c2cbede9480c55f538ca29c6c27007ed8f3133c708"} Feb 20 12:16:57.077147 master-0 kubenswrapper[31420]: I0220 12:16:57.077078 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:57.077592 master-0 kubenswrapper[31420]: I0220 12:16:57.077554 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:57.077642 master-0 kubenswrapper[31420]: E0220 12:16:57.077575 31420 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 
20 12:16:57.077673 master-0 kubenswrapper[31420]: E0220 12:16:57.077648 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:16:59.077628394 +0000 UTC m=+723.796866625 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "webhook-server-cert" not found Feb 20 12:16:57.077767 master-0 kubenswrapper[31420]: E0220 12:16:57.077744 31420 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 12:16:57.077887 master-0 kubenswrapper[31420]: E0220 12:16:57.077851 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:16:59.077827209 +0000 UTC m=+723.797065550 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "metrics-server-cert" not found Feb 20 12:16:57.280583 master-0 kubenswrapper[31420]: I0220 12:16:57.280524 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd"] Feb 20 12:16:57.293478 master-0 kubenswrapper[31420]: I0220 12:16:57.293406 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7"] Feb 20 12:16:57.317260 master-0 kubenswrapper[31420]: I0220 12:16:57.316037 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm"] Feb 20 12:16:57.525286 master-0 kubenswrapper[31420]: W0220 12:16:57.525219 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38f6b140_e4b4_4999_af19_6dc2973ca6ed.slice/crio-47d13311fa1f5630e3c3af73656eb23953ad3a2f676755a86db1be42f7c46a12 WatchSource:0}: Error finding container 47d13311fa1f5630e3c3af73656eb23953ad3a2f676755a86db1be42f7c46a12: Status 404 returned error can't find the container with id 47d13311fa1f5630e3c3af73656eb23953ad3a2f676755a86db1be42f7c46a12 Feb 20 12:16:57.527433 master-0 kubenswrapper[31420]: W0220 12:16:57.527368 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb55110b9_7c65_46ee_a4f2_4e9b6a69158e.slice/crio-5e2a8c2b6ea16638d3758417cb260008c605c85f72107f2e2be4f3b57115ecc9 WatchSource:0}: Error finding container 5e2a8c2b6ea16638d3758417cb260008c605c85f72107f2e2be4f3b57115ecc9: Status 404 returned error can't find the container with id 
5e2a8c2b6ea16638d3758417cb260008c605c85f72107f2e2be4f3b57115ecc9 Feb 20 12:16:57.532031 master-0 kubenswrapper[31420]: W0220 12:16:57.531964 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc76a47e8_3df4_4cdf_9eb2_dfc28847f5ad.slice/crio-7b98607cb11dd1556e1e007b2ffc25c771c53195f2a169cc89c3074799996c1d WatchSource:0}: Error finding container 7b98607cb11dd1556e1e007b2ffc25c771c53195f2a169cc89c3074799996c1d: Status 404 returned error can't find the container with id 7b98607cb11dd1556e1e007b2ffc25c771c53195f2a169cc89c3074799996c1d Feb 20 12:16:57.536899 master-0 kubenswrapper[31420]: W0220 12:16:57.536843 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0503483c_565e_4e79_ba1b_bf0ad98481b0.slice/crio-bc7de04ddb1d0eb8f362da24bf2d26a71a3c52f6069b326c6fda6dc07d16354e WatchSource:0}: Error finding container bc7de04ddb1d0eb8f362da24bf2d26a71a3c52f6069b326c6fda6dc07d16354e: Status 404 returned error can't find the container with id bc7de04ddb1d0eb8f362da24bf2d26a71a3c52f6069b326c6fda6dc07d16354e Feb 20 12:16:57.545004 master-0 kubenswrapper[31420]: W0220 12:16:57.544959 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bec2508_5bbe_4c35_8292_94a77950167a.slice/crio-9d0de32369ea46902d512ad48b4a4f0e24c4051c554631ce9bcaa9f5675c08a5 WatchSource:0}: Error finding container 9d0de32369ea46902d512ad48b4a4f0e24c4051c554631ce9bcaa9f5675c08a5: Status 404 returned error can't find the container with id 9d0de32369ea46902d512ad48b4a4f0e24c4051c554631ce9bcaa9f5675c08a5 Feb 20 12:16:58.078290 master-0 kubenswrapper[31420]: I0220 12:16:58.078236 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd" 
event={"ID":"8e537287-9fc2-4c6f-bea9-0b7c8565a6c7","Type":"ContainerStarted","Data":"eb68bacfc810e946e37e5a877499c72fa7cba60d00907c6e752e304a7aade69d"} Feb 20 12:16:58.082407 master-0 kubenswrapper[31420]: I0220 12:16:58.082370 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf" event={"ID":"b55110b9-7c65-46ee-a4f2-4e9b6a69158e","Type":"ContainerStarted","Data":"5e2a8c2b6ea16638d3758417cb260008c605c85f72107f2e2be4f3b57115ecc9"} Feb 20 12:16:58.085198 master-0 kubenswrapper[31420]: I0220 12:16:58.085145 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4" event={"ID":"c76a47e8-3df4-4cdf-9eb2-dfc28847f5ad","Type":"ContainerStarted","Data":"7b98607cb11dd1556e1e007b2ffc25c771c53195f2a169cc89c3074799996c1d"} Feb 20 12:16:58.086653 master-0 kubenswrapper[31420]: I0220 12:16:58.086626 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g" event={"ID":"4bec2508-5bbe-4c35-8292-94a77950167a","Type":"ContainerStarted","Data":"9d0de32369ea46902d512ad48b4a4f0e24c4051c554631ce9bcaa9f5675c08a5"} Feb 20 12:16:58.088264 master-0 kubenswrapper[31420]: I0220 12:16:58.088234 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg" event={"ID":"38f6b140-e4b4-4999-af19-6dc2973ca6ed","Type":"ContainerStarted","Data":"47d13311fa1f5630e3c3af73656eb23953ad3a2f676755a86db1be42f7c46a12"} Feb 20 12:16:58.089420 master-0 kubenswrapper[31420]: I0220 12:16:58.089389 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7" event={"ID":"0503483c-565e-4e79-ba1b-bf0ad98481b0","Type":"ContainerStarted","Data":"bc7de04ddb1d0eb8f362da24bf2d26a71a3c52f6069b326c6fda6dc07d16354e"} Feb 20 12:16:58.163846 master-0 
kubenswrapper[31420]: W0220 12:16:58.163781 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25065d47_a25e_4035_8c33_c73eb191f1b2.slice/crio-c542f86862ad0821fd6c60356bdaebf1c5aaccb3332ccee28bddd11bc55ad284 WatchSource:0}: Error finding container c542f86862ad0821fd6c60356bdaebf1c5aaccb3332ccee28bddd11bc55ad284: Status 404 returned error can't find the container with id c542f86862ad0821fd6c60356bdaebf1c5aaccb3332ccee28bddd11bc55ad284 Feb 20 12:16:58.169694 master-0 kubenswrapper[31420]: W0220 12:16:58.169638 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bb94d70_fe73_4d78_8f5a_6ebb2f0ea9d5.slice/crio-84015796b1634ca45b5d4493a5efd816a315e209677f95e3e0bab7c88cce1839 WatchSource:0}: Error finding container 84015796b1634ca45b5d4493a5efd816a315e209677f95e3e0bab7c88cce1839: Status 404 returned error can't find the container with id 84015796b1634ca45b5d4493a5efd816a315e209677f95e3e0bab7c88cce1839 Feb 20 12:16:58.175093 master-0 kubenswrapper[31420]: W0220 12:16:58.175048 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod862d9673_54d9_4647_bcc5_146f5ab37483.slice/crio-c36dc5a56ae3e5123c89dfcf42f4fc4cf991e7874b21dd2928da619c6b7b1dea WatchSource:0}: Error finding container c36dc5a56ae3e5123c89dfcf42f4fc4cf991e7874b21dd2928da619c6b7b1dea: Status 404 returned error can't find the container with id c36dc5a56ae3e5123c89dfcf42f4fc4cf991e7874b21dd2928da619c6b7b1dea Feb 20 12:16:58.408844 master-0 kubenswrapper[31420]: I0220 12:16:58.408627 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " 
pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg" Feb 20 12:16:58.409193 master-0 kubenswrapper[31420]: E0220 12:16:58.409170 31420 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 12:16:58.409278 master-0 kubenswrapper[31420]: E0220 12:16:58.409255 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert podName:5d18777a-1196-401b-b94c-6c8504f5ce3b nodeName:}" failed. No retries permitted until 2026-02-20 12:17:02.409219746 +0000 UTC m=+727.128457987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert") pod "infra-operator-controller-manager-5f879c76b6-bn5dg" (UID: "5d18777a-1196-401b-b94c-6c8504f5ce3b") : secret "infra-operator-webhook-server-cert" not found Feb 20 12:16:58.831080 master-0 kubenswrapper[31420]: W0220 12:16:58.831001 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d890613_25fd_4d7f_b82b_1295bb5a66fd.slice/crio-64954c8f96b7a21e84896c66a9a663b32728993f51d2dac3c6a2671b6fb6c436 WatchSource:0}: Error finding container 64954c8f96b7a21e84896c66a9a663b32728993f51d2dac3c6a2671b6fb6c436: Status 404 returned error can't find the container with id 64954c8f96b7a21e84896c66a9a663b32728993f51d2dac3c6a2671b6fb6c436 Feb 20 12:16:58.920950 master-0 kubenswrapper[31420]: I0220 12:16:58.920879 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 
12:16:58.921424 master-0 kubenswrapper[31420]: E0220 12:16:58.921355 31420 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:16:58.921613 master-0 kubenswrapper[31420]: E0220 12:16:58.921553 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert podName:1db07cb7-a520-4044-95b9-05f1ec724217 nodeName:}" failed. No retries permitted until 2026-02-20 12:17:02.921480749 +0000 UTC m=+727.640719050 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" (UID: "1db07cb7-a520-4044-95b9-05f1ec724217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:16:59.110207 master-0 kubenswrapper[31420]: I0220 12:16:59.110081 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf" event={"ID":"862d9673-54d9-4647-bcc5-146f5ab37483","Type":"ContainerStarted","Data":"c36dc5a56ae3e5123c89dfcf42f4fc4cf991e7874b21dd2928da619c6b7b1dea"} Feb 20 12:16:59.111294 master-0 kubenswrapper[31420]: I0220 12:16:59.111224 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf" event={"ID":"25065d47-a25e-4035-8c33-c73eb191f1b2","Type":"ContainerStarted","Data":"c542f86862ad0821fd6c60356bdaebf1c5aaccb3332ccee28bddd11bc55ad284"} Feb 20 12:16:59.115922 master-0 kubenswrapper[31420]: I0220 12:16:59.115561 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc" 
event={"ID":"4bb94d70-fe73-4d78-8f5a-6ebb2f0ea9d5","Type":"ContainerStarted","Data":"84015796b1634ca45b5d4493a5efd816a315e209677f95e3e0bab7c88cce1839"} Feb 20 12:16:59.120054 master-0 kubenswrapper[31420]: I0220 12:16:59.120013 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm" event={"ID":"4d890613-25fd-4d7f-b82b-1295bb5a66fd","Type":"ContainerStarted","Data":"64954c8f96b7a21e84896c66a9a663b32728993f51d2dac3c6a2671b6fb6c436"} Feb 20 12:16:59.126517 master-0 kubenswrapper[31420]: I0220 12:16:59.126358 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:59.126517 master-0 kubenswrapper[31420]: I0220 12:16:59.126451 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:16:59.126653 master-0 kubenswrapper[31420]: E0220 12:16:59.126535 31420 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 12:16:59.126653 master-0 kubenswrapper[31420]: E0220 12:16:59.126621 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. 
No retries permitted until 2026-02-20 12:17:03.126604131 +0000 UTC m=+727.845842372 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "metrics-server-cert" not found Feb 20 12:16:59.126779 master-0 kubenswrapper[31420]: E0220 12:16:59.126729 31420 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 12:16:59.126865 master-0 kubenswrapper[31420]: E0220 12:16:59.126846 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:17:03.126823217 +0000 UTC m=+727.846061468 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "webhook-server-cert" not found Feb 20 12:16:59.415083 master-0 kubenswrapper[31420]: W0220 12:16:59.414952 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0bc4973_cb77_49a8_be3a_35340c08e9a0.slice/crio-0b7197c274b213ed4f9477b1173d6b926f7be12a446f21a643adcb4b1c5df7d6 WatchSource:0}: Error finding container 0b7197c274b213ed4f9477b1173d6b926f7be12a446f21a643adcb4b1c5df7d6: Status 404 returned error can't find the container with id 0b7197c274b213ed4f9477b1173d6b926f7be12a446f21a643adcb4b1c5df7d6 Feb 20 12:17:00.139639 master-0 kubenswrapper[31420]: I0220 12:17:00.139573 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7" event={"ID":"b0bc4973-cb77-49a8-be3a-35340c08e9a0","Type":"ContainerStarted","Data":"0b7197c274b213ed4f9477b1173d6b926f7be12a446f21a643adcb4b1c5df7d6"} Feb 20 12:17:02.616229 master-0 kubenswrapper[31420]: I0220 12:17:02.616133 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg" Feb 20 12:17:02.621885 master-0 kubenswrapper[31420]: E0220 12:17:02.621514 31420 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 12:17:02.622276 master-0 kubenswrapper[31420]: E0220 12:17:02.621975 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert podName:5d18777a-1196-401b-b94c-6c8504f5ce3b nodeName:}" failed. No retries permitted until 2026-02-20 12:17:10.621569336 +0000 UTC m=+735.340807577 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert") pod "infra-operator-controller-manager-5f879c76b6-bn5dg" (UID: "5d18777a-1196-401b-b94c-6c8504f5ce3b") : secret "infra-operator-webhook-server-cert" not found Feb 20 12:17:02.922813 master-0 kubenswrapper[31420]: I0220 12:17:02.922683 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 12:17:02.923007 master-0 kubenswrapper[31420]: E0220 12:17:02.922912 31420 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:17:02.923043 master-0 kubenswrapper[31420]: E0220 12:17:02.923026 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert podName:1db07cb7-a520-4044-95b9-05f1ec724217 nodeName:}" failed. No retries permitted until 2026-02-20 12:17:10.923002465 +0000 UTC m=+735.642240706 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" (UID: "1db07cb7-a520-4044-95b9-05f1ec724217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:17:03.127433 master-0 kubenswrapper[31420]: I0220 12:17:03.127352 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:17:03.127433 master-0 kubenswrapper[31420]: I0220 12:17:03.127413 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:17:03.127751 master-0 kubenswrapper[31420]: E0220 12:17:03.127561 31420 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 12:17:03.127751 master-0 kubenswrapper[31420]: E0220 12:17:03.127627 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:17:11.127609663 +0000 UTC m=+735.846847904 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "metrics-server-cert" not found Feb 20 12:17:03.127751 master-0 kubenswrapper[31420]: E0220 12:17:03.127566 31420 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 12:17:03.127751 master-0 kubenswrapper[31420]: E0220 12:17:03.127669 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:17:11.127662855 +0000 UTC m=+735.846901096 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "webhook-server-cert" not found Feb 20 12:17:10.668748 master-0 kubenswrapper[31420]: I0220 12:17:10.668614 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg" Feb 20 12:17:10.669814 master-0 kubenswrapper[31420]: E0220 12:17:10.668864 31420 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 12:17:10.669814 master-0 kubenswrapper[31420]: E0220 12:17:10.669014 31420 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert podName:5d18777a-1196-401b-b94c-6c8504f5ce3b nodeName:}" failed. No retries permitted until 2026-02-20 12:17:26.66897346 +0000 UTC m=+751.388211751 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert") pod "infra-operator-controller-manager-5f879c76b6-bn5dg" (UID: "5d18777a-1196-401b-b94c-6c8504f5ce3b") : secret "infra-operator-webhook-server-cert" not found Feb 20 12:17:10.976351 master-0 kubenswrapper[31420]: I0220 12:17:10.975940 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" Feb 20 12:17:10.976351 master-0 kubenswrapper[31420]: E0220 12:17:10.976134 31420 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:17:10.976351 master-0 kubenswrapper[31420]: E0220 12:17:10.976210 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert podName:1db07cb7-a520-4044-95b9-05f1ec724217 nodeName:}" failed. No retries permitted until 2026-02-20 12:17:26.976195362 +0000 UTC m=+751.695433603 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" (UID: "1db07cb7-a520-4044-95b9-05f1ec724217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 12:17:11.179939 master-0 kubenswrapper[31420]: I0220 12:17:11.179888 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:17:11.180157 master-0 kubenswrapper[31420]: E0220 12:17:11.180083 31420 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 12:17:11.180218 master-0 kubenswrapper[31420]: I0220 12:17:11.180135 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" Feb 20 12:17:11.180371 master-0 kubenswrapper[31420]: E0220 12:17:11.180185 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:17:27.180162903 +0000 UTC m=+751.899401154 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "metrics-server-cert" not found Feb 20 12:17:11.180435 master-0 kubenswrapper[31420]: E0220 12:17:11.180267 31420 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 12:17:11.180547 master-0 kubenswrapper[31420]: E0220 12:17:11.180493 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs podName:43ce6170-0fbe-4278-a2cc-131dad533824 nodeName:}" failed. No retries permitted until 2026-02-20 12:17:27.180458891 +0000 UTC m=+751.899697172 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-wnsg5" (UID: "43ce6170-0fbe-4278-a2cc-131dad533824") : secret "webhook-server-cert" not found Feb 20 12:17:13.294470 master-0 kubenswrapper[31420]: I0220 12:17:13.293901 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x" event={"ID":"5b412160-9ed7-4c10-9dc9-7fbe93d45803","Type":"ContainerStarted","Data":"7fcb27ba96cdc5e1ef612419ce83c496d5354242684e2321a8aba66b680c0255"} Feb 20 12:17:13.294470 master-0 kubenswrapper[31420]: I0220 12:17:13.293978 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x" Feb 20 12:17:13.299545 master-0 kubenswrapper[31420]: I0220 12:17:13.299498 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7" 
event={"ID":"0503483c-565e-4e79-ba1b-bf0ad98481b0","Type":"ContainerStarted","Data":"3b9a70fb507cadf6e5fe50693ae7055b9a999c8c86f9ddf93417cf34c4beb816"} Feb 20 12:17:13.300149 master-0 kubenswrapper[31420]: I0220 12:17:13.299747 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7" Feb 20 12:17:13.316040 master-0 kubenswrapper[31420]: I0220 12:17:13.315994 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8" event={"ID":"7e9508f3-a3ab-4df1-b9fb-775bba9a0f43","Type":"ContainerStarted","Data":"2a1028742a942055e4e249264de79f7b5d7889b1bcf91dfbe92a2bf371e8a0ac"} Feb 20 12:17:13.316120 master-0 kubenswrapper[31420]: I0220 12:17:13.316105 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8" Feb 20 12:17:13.320494 master-0 kubenswrapper[31420]: I0220 12:17:13.320440 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x" podStartSLOduration=12.200748119 podStartE2EDuration="20.320425007s" podCreationTimestamp="2026-02-20 12:16:53 +0000 UTC" firstStartedPulling="2026-02-20 12:16:55.077659728 +0000 UTC m=+719.796897969" lastFinishedPulling="2026-02-20 12:17:03.197336616 +0000 UTC m=+727.916574857" observedRunningTime="2026-02-20 12:17:13.309543443 +0000 UTC m=+738.028781684" watchObservedRunningTime="2026-02-20 12:17:13.320425007 +0000 UTC m=+738.039663248" Feb 20 12:17:13.331962 master-0 kubenswrapper[31420]: I0220 12:17:13.329663 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc" event={"ID":"4bb94d70-fe73-4d78-8f5a-6ebb2f0ea9d5","Type":"ContainerStarted","Data":"180cb721b2a3ceb9bb61473291b8c0062268c3a32576c626ccc36cb0b1642b7a"} Feb 20 
12:17:13.331962 master-0 kubenswrapper[31420]: I0220 12:17:13.330267 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc" Feb 20 12:17:13.344745 master-0 kubenswrapper[31420]: I0220 12:17:13.344110 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7" podStartSLOduration=4.96722256 podStartE2EDuration="19.34409047s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:57.5403882 +0000 UTC m=+722.259626441" lastFinishedPulling="2026-02-20 12:17:11.91725611 +0000 UTC m=+736.636494351" observedRunningTime="2026-02-20 12:17:13.342227188 +0000 UTC m=+738.061465429" watchObservedRunningTime="2026-02-20 12:17:13.34409047 +0000 UTC m=+738.063328721" Feb 20 12:17:13.366662 master-0 kubenswrapper[31420]: I0220 12:17:13.360561 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd" event={"ID":"8e537287-9fc2-4c6f-bea9-0b7c8565a6c7","Type":"ContainerStarted","Data":"3b7844b663573962c29e8b3de401a7c55be1b5c3ea45c6b764a756aea224eba6"} Feb 20 12:17:13.366662 master-0 kubenswrapper[31420]: I0220 12:17:13.362541 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd" Feb 20 12:17:13.413473 master-0 kubenswrapper[31420]: I0220 12:17:13.409857 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk" event={"ID":"c4b62567-b85d-476e-a92a-24b43173afd3","Type":"ContainerStarted","Data":"927a538606d9b1c5534d6b95cf71442597237e3edd74dea5cf057b5cffb15163"} Feb 20 12:17:13.413473 master-0 kubenswrapper[31420]: I0220 12:17:13.411099 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk" Feb 20 12:17:13.423009 master-0 kubenswrapper[31420]: I0220 12:17:13.420224 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc" podStartSLOduration=5.207789835 podStartE2EDuration="19.420199881s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:58.173586309 +0000 UTC m=+722.892824550" lastFinishedPulling="2026-02-20 12:17:12.385996345 +0000 UTC m=+737.105234596" observedRunningTime="2026-02-20 12:17:13.409855531 +0000 UTC m=+738.129093772" watchObservedRunningTime="2026-02-20 12:17:13.420199881 +0000 UTC m=+738.139438122" Feb 20 12:17:13.426456 master-0 kubenswrapper[31420]: I0220 12:17:13.424819 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg" event={"ID":"38f6b140-e4b4-4999-af19-6dc2973ca6ed","Type":"ContainerStarted","Data":"3da851e88d3c04d32cc5f1dbeb0342df6839ebcd08a8b49124c0d2155765ca13"} Feb 20 12:17:13.426456 master-0 kubenswrapper[31420]: I0220 12:17:13.425718 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg" Feb 20 12:17:13.486553 master-0 kubenswrapper[31420]: I0220 12:17:13.482491 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8" podStartSLOduration=4.507185668 podStartE2EDuration="19.482466563s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:56.336457702 +0000 UTC m=+721.055695943" lastFinishedPulling="2026-02-20 12:17:11.311738577 +0000 UTC m=+736.030976838" observedRunningTime="2026-02-20 12:17:13.4651923 +0000 UTC m=+738.184430541" watchObservedRunningTime="2026-02-20 12:17:13.482466563 +0000 UTC 
m=+738.201704804" Feb 20 12:17:13.486553 master-0 kubenswrapper[31420]: I0220 12:17:13.484679 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf" event={"ID":"b55110b9-7c65-46ee-a4f2-4e9b6a69158e","Type":"ContainerStarted","Data":"f8b215fc25f7645b50a33adc715606517707dcfeebe54c1cef7d49e916373b1e"} Feb 20 12:17:13.486553 master-0 kubenswrapper[31420]: I0220 12:17:13.485714 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf" Feb 20 12:17:13.524549 master-0 kubenswrapper[31420]: I0220 12:17:13.520807 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf" Feb 20 12:17:13.524549 master-0 kubenswrapper[31420]: I0220 12:17:13.520876 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf" event={"ID":"25065d47-a25e-4035-8c33-c73eb191f1b2","Type":"ContainerStarted","Data":"c62a1fc628b7bda3bc23d026d13dafe72af7ea7d324568744bcdb26c77be0670"} Feb 20 12:17:13.528445 master-0 kubenswrapper[31420]: I0220 12:17:13.525897 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r" event={"ID":"61d84cd4-22bd-4958-8c16-ea0edee7180e","Type":"ContainerStarted","Data":"a75900ebda6122ee4c742c2a50259518d33bedfd2af18935d3960104d87bfed1"} Feb 20 12:17:13.528445 master-0 kubenswrapper[31420]: I0220 12:17:13.526096 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r" Feb 20 12:17:13.540951 master-0 kubenswrapper[31420]: I0220 12:17:13.539408 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx" 
event={"ID":"7d5ba596-526c-42b9-845a-9a4ec0b084e9","Type":"ContainerStarted","Data":"e4df81672ed9e5fb3cae0fcb801cf945d18d4170aa95a39b08ee2eb30607caa2"} Feb 20 12:17:13.540951 master-0 kubenswrapper[31420]: I0220 12:17:13.540350 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx" Feb 20 12:17:13.544544 master-0 kubenswrapper[31420]: I0220 12:17:13.542440 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4" event={"ID":"3e4015cc-c404-4a2d-8ac0-a550b2b168f3","Type":"ContainerStarted","Data":"8cc2d3785f63bfe35e7cb1ba1c8f686a89df658eb2b69d5d6b4741f208b99974"} Feb 20 12:17:13.544544 master-0 kubenswrapper[31420]: I0220 12:17:13.543199 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4" Feb 20 12:17:13.562547 master-0 kubenswrapper[31420]: I0220 12:17:13.557853 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng" event={"ID":"a86f22c3-c162-407b-9f7c-ee9fec02d78e","Type":"ContainerStarted","Data":"0a4cbcfdb2f1db469430d42b97d119b25cdbab49c441cb54a3d83a5fabdef141"} Feb 20 12:17:13.562547 master-0 kubenswrapper[31420]: I0220 12:17:13.558861 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng" Feb 20 12:17:13.562547 master-0 kubenswrapper[31420]: I0220 12:17:13.561906 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd" podStartSLOduration=4.622702433 podStartE2EDuration="19.561885957s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:57.542400797 +0000 UTC m=+722.261639038" 
lastFinishedPulling="2026-02-20 12:17:12.481584321 +0000 UTC m=+737.200822562" observedRunningTime="2026-02-20 12:17:13.538889783 +0000 UTC m=+738.258128034" watchObservedRunningTime="2026-02-20 12:17:13.561885957 +0000 UTC m=+738.281124198" Feb 20 12:17:13.591976 master-0 kubenswrapper[31420]: I0220 12:17:13.591918 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf" event={"ID":"f0fda7fa-0935-47fc-8c9b-723d5b352c04","Type":"ContainerStarted","Data":"c9dcc7177297427763cd8f13cfb83a9bbaafa1f4f7cd48b76f29439998cd6d36"} Feb 20 12:17:13.592689 master-0 kubenswrapper[31420]: I0220 12:17:13.592667 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf" Feb 20 12:17:13.725614 master-0 kubenswrapper[31420]: I0220 12:17:13.722390 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg" podStartSLOduration=4.689681098 podStartE2EDuration="19.722375971s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:57.528314962 +0000 UTC m=+722.247553203" lastFinishedPulling="2026-02-20 12:17:12.561009825 +0000 UTC m=+737.280248076" observedRunningTime="2026-02-20 12:17:13.71915279 +0000 UTC m=+738.438391031" watchObservedRunningTime="2026-02-20 12:17:13.722375971 +0000 UTC m=+738.441614212" Feb 20 12:17:13.949309 master-0 kubenswrapper[31420]: I0220 12:17:13.949211 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk" podStartSLOduration=7.902649006 podStartE2EDuration="19.949191611s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:56.283684714 +0000 UTC m=+721.002922965" lastFinishedPulling="2026-02-20 12:17:08.330227329 +0000 UTC 
m=+733.049465570" observedRunningTime="2026-02-20 12:17:13.849337045 +0000 UTC m=+738.568575286" watchObservedRunningTime="2026-02-20 12:17:13.949191611 +0000 UTC m=+738.668429852"
Feb 20 12:17:13.952131 master-0 kubenswrapper[31420]: I0220 12:17:13.952094 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf" podStartSLOduration=5.098018661 podStartE2EDuration="19.952085712s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:57.531914963 +0000 UTC m=+722.251153204" lastFinishedPulling="2026-02-20 12:17:12.385982004 +0000 UTC m=+737.105220255" observedRunningTime="2026-02-20 12:17:13.948556193 +0000 UTC m=+738.667794454" watchObservedRunningTime="2026-02-20 12:17:13.952085712 +0000 UTC m=+738.671323953"
Feb 20 12:17:14.176545 master-0 kubenswrapper[31420]: I0220 12:17:14.176062 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf" podStartSLOduration=13.256752436 podStartE2EDuration="21.176038683s" podCreationTimestamp="2026-02-20 12:16:53 +0000 UTC" firstStartedPulling="2026-02-20 12:16:55.250996661 +0000 UTC m=+719.970234902" lastFinishedPulling="2026-02-20 12:17:03.170282888 +0000 UTC m=+727.889521149" observedRunningTime="2026-02-20 12:17:14.059961833 +0000 UTC m=+738.779200094" watchObservedRunningTime="2026-02-20 12:17:14.176038683 +0000 UTC m=+738.895276924"
Feb 20 12:17:14.298259 master-0 kubenswrapper[31420]: I0220 12:17:14.298178 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf" podStartSLOduration=6.078582965 podStartE2EDuration="20.298162332s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:58.1664924 +0000 UTC m=+722.885730651" lastFinishedPulling="2026-02-20 12:17:12.386071777 +0000 UTC m=+737.105310018" observedRunningTime="2026-02-20 12:17:14.294433168 +0000 UTC m=+739.013671419" watchObservedRunningTime="2026-02-20 12:17:14.298162332 +0000 UTC m=+739.017400573"
Feb 20 12:17:14.301769 master-0 kubenswrapper[31420]: I0220 12:17:14.301730 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx" podStartSLOduration=10.925305557 podStartE2EDuration="21.301723552s" podCreationTimestamp="2026-02-20 12:16:53 +0000 UTC" firstStartedPulling="2026-02-20 12:16:55.815563937 +0000 UTC m=+720.534802178" lastFinishedPulling="2026-02-20 12:17:06.191981892 +0000 UTC m=+730.911220173" observedRunningTime="2026-02-20 12:17:14.177605716 +0000 UTC m=+738.896843947" watchObservedRunningTime="2026-02-20 12:17:14.301723552 +0000 UTC m=+739.020961793"
Feb 20 12:17:14.337731 master-0 kubenswrapper[31420]: I0220 12:17:14.337647 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r" podStartSLOduration=13.947102903 podStartE2EDuration="21.337624637s" podCreationTimestamp="2026-02-20 12:16:53 +0000 UTC" firstStartedPulling="2026-02-20 12:16:55.778702385 +0000 UTC m=+720.497940626" lastFinishedPulling="2026-02-20 12:17:03.169224109 +0000 UTC m=+727.888462360" observedRunningTime="2026-02-20 12:17:14.336709771 +0000 UTC m=+739.055948022" watchObservedRunningTime="2026-02-20 12:17:14.337624637 +0000 UTC m=+739.056862878"
Feb 20 12:17:14.375136 master-0 kubenswrapper[31420]: I0220 12:17:14.375047 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4" podStartSLOduration=13.569242865 podStartE2EDuration="21.375026234s" podCreationTimestamp="2026-02-20 12:16:53 +0000 UTC" firstStartedPulling="2026-02-20 12:16:55.360064445 +0000 UTC m=+720.079302676" lastFinishedPulling="2026-02-20 12:17:03.165847764 +0000 UTC m=+727.885086045" observedRunningTime="2026-02-20 12:17:14.373555293 +0000 UTC m=+739.092793534" watchObservedRunningTime="2026-02-20 12:17:14.375026234 +0000 UTC m=+739.094264475"
Feb 20 12:17:14.411684 master-0 kubenswrapper[31420]: I0220 12:17:14.411600 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng" podStartSLOduration=12.229165653 podStartE2EDuration="21.411581157s" podCreationTimestamp="2026-02-20 12:16:53 +0000 UTC" firstStartedPulling="2026-02-20 12:16:55.858611723 +0000 UTC m=+720.577849964" lastFinishedPulling="2026-02-20 12:17:05.041027217 +0000 UTC m=+729.760265468" observedRunningTime="2026-02-20 12:17:14.404686064 +0000 UTC m=+739.123924305" watchObservedRunningTime="2026-02-20 12:17:14.411581157 +0000 UTC m=+739.130819388"
Feb 20 12:17:14.670553 master-0 kubenswrapper[31420]: I0220 12:17:14.668715 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7" event={"ID":"b0bc4973-cb77-49a8-be3a-35340c08e9a0","Type":"ContainerStarted","Data":"80ca3e3627d55cb6e437478eee8ebbd27d13f1c0ca962fcd63661c95f0c2d7ae"}
Feb 20 12:17:14.679086 master-0 kubenswrapper[31420]: I0220 12:17:14.679045 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm" event={"ID":"4d890613-25fd-4d7f-b82b-1295bb5a66fd","Type":"ContainerStarted","Data":"a486366868077c29f4b58cb03c201bc9a62d986c8a1bee8451c36437ba57ebc6"}
Feb 20 12:17:14.680577 master-0 kubenswrapper[31420]: I0220 12:17:14.679849 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm"
Feb 20 12:17:14.681061 master-0 kubenswrapper[31420]: I0220 12:17:14.681045 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf" event={"ID":"862d9673-54d9-4647-bcc5-146f5ab37483","Type":"ContainerStarted","Data":"e2740191e150d1f7336ff7d629c336a25968d63cb1f6bc0453dbb17036b9d6af"}
Feb 20 12:17:14.681486 master-0 kubenswrapper[31420]: I0220 12:17:14.681471 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf"
Feb 20 12:17:14.682463 master-0 kubenswrapper[31420]: I0220 12:17:14.682447 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4" event={"ID":"c76a47e8-3df4-4cdf-9eb2-dfc28847f5ad","Type":"ContainerStarted","Data":"3ab26e64faa2fd808cd5156d4066a26aaf917a20b44bdd5f128caf42944f5fc3"}
Feb 20 12:17:14.682714 master-0 kubenswrapper[31420]: I0220 12:17:14.682673 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4"
Feb 20 12:17:14.683774 master-0 kubenswrapper[31420]: I0220 12:17:14.683757 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g" event={"ID":"4bec2508-5bbe-4c35-8292-94a77950167a","Type":"ContainerStarted","Data":"520fa68b02d0f0cb462c80ab37f26ccbd58236e738b63c64110ff518120c905d"}
Feb 20 12:17:14.684204 master-0 kubenswrapper[31420]: I0220 12:17:14.684189 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g"
Feb 20 12:17:14.685858 master-0 kubenswrapper[31420]: I0220 12:17:14.685841 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5" event={"ID":"ec3aef87-8ef5-4e4c-a06e-3d9424c62df6","Type":"ContainerStarted","Data":"6e900e801637133b1cdf2b346acf44d839b5e2b07bcdc5237612b5d25c6281bd"}
Feb 20 12:17:14.685951 master-0 kubenswrapper[31420]: I0220 12:17:14.685939 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5"
Feb 20 12:17:14.696216 master-0 kubenswrapper[31420]: I0220 12:17:14.696126 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kwlp7" podStartSLOduration=7.453625534 podStartE2EDuration="20.696110824s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:59.417882667 +0000 UTC m=+724.137120908" lastFinishedPulling="2026-02-20 12:17:12.660367957 +0000 UTC m=+737.379606198" observedRunningTime="2026-02-20 12:17:14.691480484 +0000 UTC m=+739.410718725" watchObservedRunningTime="2026-02-20 12:17:14.696110824 +0000 UTC m=+739.415349065"
Feb 20 12:17:14.720553 master-0 kubenswrapper[31420]: I0220 12:17:14.719167 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5" podStartSLOduration=4.480328546 podStartE2EDuration="20.719151919s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:56.322219183 +0000 UTC m=+721.041457424" lastFinishedPulling="2026-02-20 12:17:12.561042556 +0000 UTC m=+737.280280797" observedRunningTime="2026-02-20 12:17:14.718239183 +0000 UTC m=+739.437477424" watchObservedRunningTime="2026-02-20 12:17:14.719151919 +0000 UTC m=+739.438390160"
Feb 20 12:17:14.740684 master-0 kubenswrapper[31420]: I0220 12:17:14.740603 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4" podStartSLOduration=5.6833787099999995 podStartE2EDuration="20.740587039s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:57.534785583 +0000 UTC m=+722.254023824" lastFinishedPulling="2026-02-20 12:17:12.591993912 +0000 UTC m=+737.311232153" observedRunningTime="2026-02-20 12:17:14.739805427 +0000 UTC m=+739.459043678" watchObservedRunningTime="2026-02-20 12:17:14.740587039 +0000 UTC m=+739.459825270"
Feb 20 12:17:14.777828 master-0 kubenswrapper[31420]: I0220 12:17:14.777748 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf" podStartSLOduration=6.389345526 podStartE2EDuration="20.777731059s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:58.18362781 +0000 UTC m=+722.902866051" lastFinishedPulling="2026-02-20 12:17:12.572013343 +0000 UTC m=+737.291251584" observedRunningTime="2026-02-20 12:17:14.771503725 +0000 UTC m=+739.490741966" watchObservedRunningTime="2026-02-20 12:17:14.777731059 +0000 UTC m=+739.496969300"
Feb 20 12:17:14.808313 master-0 kubenswrapper[31420]: I0220 12:17:14.808234 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g" podStartSLOduration=5.763461603 podStartE2EDuration="20.808212453s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:57.547942332 +0000 UTC m=+722.267180573" lastFinishedPulling="2026-02-20 12:17:12.592693182 +0000 UTC m=+737.311931423" observedRunningTime="2026-02-20 12:17:14.795885898 +0000 UTC m=+739.515124139" watchObservedRunningTime="2026-02-20 12:17:14.808212453 +0000 UTC m=+739.527450694"
Feb 20 12:17:14.824548 master-0 kubenswrapper[31420]: I0220 12:17:14.821643 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm" podStartSLOduration=7.162943436 podStartE2EDuration="20.821627378s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:16:58.833935488 +0000 UTC m=+723.553173729" lastFinishedPulling="2026-02-20 12:17:12.49261943 +0000 UTC m=+737.211857671" observedRunningTime="2026-02-20 12:17:14.816490454 +0000 UTC m=+739.535728695" watchObservedRunningTime="2026-02-20 12:17:14.821627378 +0000 UTC m=+739.540865619"
Feb 20 12:17:24.305947 master-0 kubenswrapper[31420]: I0220 12:17:24.305786 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2db7x"
Feb 20 12:17:24.408914 master-0 kubenswrapper[31420]: I0220 12:17:24.408701 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8xjmf"
Feb 20 12:17:24.594596 master-0 kubenswrapper[31420]: I0220 12:17:24.594435 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-f7vz4"
Feb 20 12:17:24.617676 master-0 kubenswrapper[31420]: I0220 12:17:24.615897 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6x7r"
Feb 20 12:17:24.668901 master-0 kubenswrapper[31420]: I0220 12:17:24.668595 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-6slqx"
Feb 20 12:17:24.703675 master-0 kubenswrapper[31420]: I0220 12:17:24.703609 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5qkng"
Feb 20 12:17:25.068790 master-0 kubenswrapper[31420]: I0220 12:17:25.068732 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bhhzk"
Feb 20 12:17:25.163445 master-0 kubenswrapper[31420]: I0220 12:17:25.163386 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xdnq5"
Feb 20 12:17:25.216102 master-0 kubenswrapper[31420]: I0220 12:17:25.216036 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-57jpf"
Feb 20 12:17:25.240794 master-0 kubenswrapper[31420]: I0220 12:17:25.236055 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rn9t8"
Feb 20 12:17:25.301118 master-0 kubenswrapper[31420]: I0220 12:17:25.301052 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-mt88g"
Feb 20 12:17:25.378831 master-0 kubenswrapper[31420]: I0220 12:17:25.378728 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rgsrf"
Feb 20 12:17:25.421306 master-0 kubenswrapper[31420]: I0220 12:17:25.421233 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bl8h7"
Feb 20 12:17:25.469961 master-0 kubenswrapper[31420]: I0220 12:17:25.469874 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fvzrc"
Feb 20 12:17:25.488969 master-0 kubenswrapper[31420]: I0220 12:17:25.486894 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d2ptm"
Feb 20 12:17:25.529743 master-0 kubenswrapper[31420]: I0220 12:17:25.528550 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qpwhf"
Feb 20 12:17:25.543211 master-0 kubenswrapper[31420]: I0220 12:17:25.543167 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rpcfg"
Feb 20 12:17:25.594395 master-0 kubenswrapper[31420]: I0220 12:17:25.594332 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-8sqt4"
Feb 20 12:17:25.600219 master-0 kubenswrapper[31420]: I0220 12:17:25.599254 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h5qzd"
Feb 20 12:17:26.692867 master-0 kubenswrapper[31420]: I0220 12:17:26.692793 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:17:26.697533 master-0 kubenswrapper[31420]: I0220 12:17:26.697455 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d18777a-1196-401b-b94c-6c8504f5ce3b-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bn5dg\" (UID: \"5d18777a-1196-401b-b94c-6c8504f5ce3b\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:17:26.853582 master-0 kubenswrapper[31420]: I0220 12:17:26.853295 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:17:26.998483 master-0 kubenswrapper[31420]: I0220 12:17:26.998230 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg"
Feb 20 12:17:27.048201 master-0 kubenswrapper[31420]: I0220 12:17:27.047754 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1db07cb7-a520-4044-95b9-05f1ec724217-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg\" (UID: \"1db07cb7-a520-4044-95b9-05f1ec724217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg"
Feb 20 12:17:27.199678 master-0 kubenswrapper[31420]: I0220 12:17:27.199608 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg"
Feb 20 12:17:27.201293 master-0 kubenswrapper[31420]: I0220 12:17:27.201214 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"
Feb 20 12:17:27.201572 master-0 kubenswrapper[31420]: I0220 12:17:27.201503 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"
Feb 20 12:17:27.217360 master-0 kubenswrapper[31420]: I0220 12:17:27.217298 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"
Feb 20 12:17:27.217906 master-0 kubenswrapper[31420]: I0220 12:17:27.217803 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43ce6170-0fbe-4278-a2cc-131dad533824-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-wnsg5\" (UID: \"43ce6170-0fbe-4278-a2cc-131dad533824\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"
Feb 20 12:17:27.443931 master-0 kubenswrapper[31420]: I0220 12:17:27.443878 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"
Feb 20 12:17:27.909492 master-0 kubenswrapper[31420]: I0220 12:17:27.909276 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"]
Feb 20 12:17:28.382605 master-0 kubenswrapper[31420]: W0220 12:17:28.382485 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1db07cb7_a520_4044_95b9_05f1ec724217.slice/crio-3fbe8acf83e54aedf037a66f49e11d7ebef2274c0f326557042a7f823cb67af7 WatchSource:0}: Error finding container 3fbe8acf83e54aedf037a66f49e11d7ebef2274c0f326557042a7f823cb67af7: Status 404 returned error can't find the container with id 3fbe8acf83e54aedf037a66f49e11d7ebef2274c0f326557042a7f823cb67af7
Feb 20 12:17:28.384051 master-0 kubenswrapper[31420]: I0220 12:17:28.384001 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"]
Feb 20 12:17:28.389500 master-0 kubenswrapper[31420]: W0220 12:17:28.389430 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43ce6170_0fbe_4278_a2cc_131dad533824.slice/crio-87a224083db2d92a38f49ad09b7d0b369a35a89caac9a1b5cdf72e1719f721ac WatchSource:0}: Error finding container 87a224083db2d92a38f49ad09b7d0b369a35a89caac9a1b5cdf72e1719f721ac: Status 404 returned error can't find the container with id 87a224083db2d92a38f49ad09b7d0b369a35a89caac9a1b5cdf72e1719f721ac
Feb 20 12:17:28.404425 master-0 kubenswrapper[31420]: I0220 12:17:28.404347 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg"]
Feb 20 12:17:28.848061 master-0 kubenswrapper[31420]: I0220 12:17:28.847939 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg" event={"ID":"5d18777a-1196-401b-b94c-6c8504f5ce3b","Type":"ContainerStarted","Data":"798018b23b7b71eb918c8095229ddaee665c8cf5322d6ec3bfa538242fb6a119"}
Feb 20 12:17:28.849333 master-0 kubenswrapper[31420]: I0220 12:17:28.849174 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" event={"ID":"1db07cb7-a520-4044-95b9-05f1ec724217","Type":"ContainerStarted","Data":"3fbe8acf83e54aedf037a66f49e11d7ebef2274c0f326557042a7f823cb67af7"}
Feb 20 12:17:28.851328 master-0 kubenswrapper[31420]: I0220 12:17:28.851233 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" event={"ID":"43ce6170-0fbe-4278-a2cc-131dad533824","Type":"ContainerStarted","Data":"f38b720648a4e21a550092a0c29fc784bd2bca4ecfec3b9c640af136d024455d"}
Feb 20 12:17:28.851328 master-0 kubenswrapper[31420]: I0220 12:17:28.851274 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" event={"ID":"43ce6170-0fbe-4278-a2cc-131dad533824","Type":"ContainerStarted","Data":"87a224083db2d92a38f49ad09b7d0b369a35a89caac9a1b5cdf72e1719f721ac"}
Feb 20 12:17:29.865728 master-0 kubenswrapper[31420]: I0220 12:17:29.862650 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"
Feb 20 12:17:30.428085 master-0 kubenswrapper[31420]: I0220 12:17:30.427974 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5" podStartSLOduration=36.427953142 podStartE2EDuration="36.427953142s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:17:30.418555479 +0000 UTC m=+755.137793760" watchObservedRunningTime="2026-02-20 12:17:30.427953142 +0000 UTC m=+755.147191373"
Feb 20 12:17:32.900279 master-0 kubenswrapper[31420]: I0220 12:17:32.900178 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg" event={"ID":"5d18777a-1196-401b-b94c-6c8504f5ce3b","Type":"ContainerStarted","Data":"ae2ef6f99d0e169b70fb281101df47d43af710c1a32ab67382ba161311298f60"}
Feb 20 12:17:32.901442 master-0 kubenswrapper[31420]: I0220 12:17:32.900358 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:17:32.903054 master-0 kubenswrapper[31420]: I0220 12:17:32.902960 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" event={"ID":"1db07cb7-a520-4044-95b9-05f1ec724217","Type":"ContainerStarted","Data":"532347d291739579a89b3de48e2b73cc2c38e02ee4937e7d801c62c03d9a431e"}
Feb 20 12:17:32.903288 master-0 kubenswrapper[31420]: I0220 12:17:32.903223 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg"
Feb 20 12:17:32.946586 master-0 kubenswrapper[31420]: I0220 12:17:32.944451 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg" podStartSLOduration=35.130391812 podStartE2EDuration="38.944420699s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:17:27.913236413 +0000 UTC m=+752.632474664" lastFinishedPulling="2026-02-20 12:17:31.72726529 +0000 UTC m=+756.446503551" observedRunningTime="2026-02-20 12:17:32.924384988 +0000 UTC m=+757.643623269" watchObservedRunningTime="2026-02-20 12:17:32.944420699 +0000 UTC m=+757.663658960"
Feb 20 12:17:32.983966 master-0 kubenswrapper[31420]: I0220 12:17:32.983858 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg" podStartSLOduration=35.64679056 podStartE2EDuration="38.983832342s" podCreationTimestamp="2026-02-20 12:16:54 +0000 UTC" firstStartedPulling="2026-02-20 12:17:28.390399133 +0000 UTC m=+753.109637404" lastFinishedPulling="2026-02-20 12:17:31.727440945 +0000 UTC m=+756.446679186" observedRunningTime="2026-02-20 12:17:32.974315306 +0000 UTC m=+757.693553587" watchObservedRunningTime="2026-02-20 12:17:32.983832342 +0000 UTC m=+757.703070593"
Feb 20 12:17:36.860759 master-0 kubenswrapper[31420]: I0220 12:17:36.860684 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bn5dg"
Feb 20 12:17:37.208289 master-0 kubenswrapper[31420]: I0220 12:17:37.208177 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg"
Feb 20 12:17:37.455066 master-0 kubenswrapper[31420]: I0220 12:17:37.454995 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-wnsg5"
Feb 20 12:18:18.560547 master-0 kubenswrapper[31420]: I0220 12:18:18.559824 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-bdgzn"]
Feb 20 12:18:18.576382 master-0 kubenswrapper[31420]: I0220 12:18:18.561978 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn"
Feb 20 12:18:18.576382 master-0 kubenswrapper[31420]: I0220 12:18:18.566112 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 20 12:18:18.576382 master-0 kubenswrapper[31420]: I0220 12:18:18.566615 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 20 12:18:18.576382 master-0 kubenswrapper[31420]: I0220 12:18:18.566743 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 20 12:18:18.577423 master-0 kubenswrapper[31420]: I0220 12:18:18.577369 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-bdgzn"]
Feb 20 12:18:18.653545 master-0 kubenswrapper[31420]: I0220 12:18:18.651647 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09686517-53db-438b-85ea-646904a52235-config\") pod \"dnsmasq-dns-5c7b6fb887-bdgzn\" (UID: \"09686517-53db-438b-85ea-646904a52235\") " pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn"
Feb 20 12:18:18.653545 master-0 kubenswrapper[31420]: I0220 12:18:18.651874 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd2pd\" (UniqueName: \"kubernetes.io/projected/09686517-53db-438b-85ea-646904a52235-kube-api-access-fd2pd\") pod \"dnsmasq-dns-5c7b6fb887-bdgzn\" (UID: \"09686517-53db-438b-85ea-646904a52235\") " pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn"
Feb 20 12:18:18.674549 master-0 kubenswrapper[31420]: I0220 12:18:18.664005 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d78499c-zbtsx"]
Feb 20 12:18:18.674549 master-0 kubenswrapper[31420]: I0220 12:18:18.665849 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.674549 master-0 kubenswrapper[31420]: I0220 12:18:18.668582 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 20 12:18:18.697551 master-0 kubenswrapper[31420]: I0220 12:18:18.675997 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-zbtsx"]
Feb 20 12:18:18.752517 master-0 kubenswrapper[31420]: I0220 12:18:18.752439 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd2pd\" (UniqueName: \"kubernetes.io/projected/09686517-53db-438b-85ea-646904a52235-kube-api-access-fd2pd\") pod \"dnsmasq-dns-5c7b6fb887-bdgzn\" (UID: \"09686517-53db-438b-85ea-646904a52235\") " pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn"
Feb 20 12:18:18.752517 master-0 kubenswrapper[31420]: I0220 12:18:18.752520 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64np\" (UniqueName: \"kubernetes.io/projected/c7efaf54-8294-4db8-acff-033af4e873ee-kube-api-access-r64np\") pod \"dnsmasq-dns-7d78499c-zbtsx\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.752854 master-0 kubenswrapper[31420]: I0220 12:18:18.752580 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-config\") pod \"dnsmasq-dns-7d78499c-zbtsx\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.752854 master-0 kubenswrapper[31420]: I0220 12:18:18.752607 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-dns-svc\") pod \"dnsmasq-dns-7d78499c-zbtsx\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.752854 master-0 kubenswrapper[31420]: I0220 12:18:18.752641 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09686517-53db-438b-85ea-646904a52235-config\") pod \"dnsmasq-dns-5c7b6fb887-bdgzn\" (UID: \"09686517-53db-438b-85ea-646904a52235\") " pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn"
Feb 20 12:18:18.753711 master-0 kubenswrapper[31420]: I0220 12:18:18.753643 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09686517-53db-438b-85ea-646904a52235-config\") pod \"dnsmasq-dns-5c7b6fb887-bdgzn\" (UID: \"09686517-53db-438b-85ea-646904a52235\") " pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn"
Feb 20 12:18:18.768609 master-0 kubenswrapper[31420]: I0220 12:18:18.768563 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd2pd\" (UniqueName: \"kubernetes.io/projected/09686517-53db-438b-85ea-646904a52235-kube-api-access-fd2pd\") pod \"dnsmasq-dns-5c7b6fb887-bdgzn\" (UID: \"09686517-53db-438b-85ea-646904a52235\") " pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn"
Feb 20 12:18:18.854060 master-0 kubenswrapper[31420]: I0220 12:18:18.853873 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r64np\" (UniqueName: \"kubernetes.io/projected/c7efaf54-8294-4db8-acff-033af4e873ee-kube-api-access-r64np\") pod \"dnsmasq-dns-7d78499c-zbtsx\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.854421 master-0 kubenswrapper[31420]: I0220 12:18:18.854380 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-config\") pod \"dnsmasq-dns-7d78499c-zbtsx\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.854568 master-0 kubenswrapper[31420]: I0220 12:18:18.854444 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-dns-svc\") pod \"dnsmasq-dns-7d78499c-zbtsx\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.855688 master-0 kubenswrapper[31420]: I0220 12:18:18.855631 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-config\") pod \"dnsmasq-dns-7d78499c-zbtsx\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.855823 master-0 kubenswrapper[31420]: I0220 12:18:18.855760 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-dns-svc\") pod \"dnsmasq-dns-7d78499c-zbtsx\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.874748 master-0 kubenswrapper[31420]: I0220 12:18:18.874693 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r64np\" (UniqueName: \"kubernetes.io/projected/c7efaf54-8294-4db8-acff-033af4e873ee-kube-api-access-r64np\") pod \"dnsmasq-dns-7d78499c-zbtsx\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:18.925154 master-0 kubenswrapper[31420]: I0220 12:18:18.925024 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn"
Feb 20 12:18:18.996568 master-0 kubenswrapper[31420]: I0220 12:18:18.993505 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-zbtsx"
Feb 20 12:18:19.419735 master-0 kubenswrapper[31420]: I0220 12:18:19.419651 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-bdgzn"]
Feb 20 12:18:19.431878 master-0 kubenswrapper[31420]: W0220 12:18:19.431810 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09686517_53db_438b_85ea_646904a52235.slice/crio-b99c8772dde43a57b9b639f1e10c192a2e8bab9f5ad07f573f30ec2a7138311b WatchSource:0}: Error finding container b99c8772dde43a57b9b639f1e10c192a2e8bab9f5ad07f573f30ec2a7138311b: Status 404 returned error can't find the container with id b99c8772dde43a57b9b639f1e10c192a2e8bab9f5ad07f573f30ec2a7138311b
Feb 20 12:18:19.530490 master-0 kubenswrapper[31420]: I0220 12:18:19.530368 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn" event={"ID":"09686517-53db-438b-85ea-646904a52235","Type":"ContainerStarted","Data":"b99c8772dde43a57b9b639f1e10c192a2e8bab9f5ad07f573f30ec2a7138311b"}
Feb 20 12:18:19.557501 master-0 kubenswrapper[31420]: I0220 12:18:19.557427 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-zbtsx"]
Feb 20 12:18:20.026548 master-0 kubenswrapper[31420]: I0220 12:18:20.025694 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-bdgzn"]
Feb 20 12:18:20.136560 master-0 kubenswrapper[31420]: I0220 12:18:20.136167 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-42dnl"]
Feb 20 12:18:20.173547 master-0 kubenswrapper[31420]: I0220 12:18:20.167955 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.251553 master-0 kubenswrapper[31420]: I0220 12:18:20.242825 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-42dnl"] Feb 20 12:18:20.273970 master-0 kubenswrapper[31420]: I0220 12:18:20.256858 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jz9n\" (UniqueName: \"kubernetes.io/projected/a8429e15-dbd4-44e4-ab7d-214ac5143db3-kube-api-access-5jz9n\") pod \"dnsmasq-dns-5bcd98d69f-42dnl\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.273970 master-0 kubenswrapper[31420]: I0220 12:18:20.256972 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-config\") pod \"dnsmasq-dns-5bcd98d69f-42dnl\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.273970 master-0 kubenswrapper[31420]: I0220 12:18:20.257031 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-42dnl\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.359699 master-0 kubenswrapper[31420]: I0220 12:18:20.359516 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jz9n\" (UniqueName: \"kubernetes.io/projected/a8429e15-dbd4-44e4-ab7d-214ac5143db3-kube-api-access-5jz9n\") pod \"dnsmasq-dns-5bcd98d69f-42dnl\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.359699 master-0 kubenswrapper[31420]: I0220 12:18:20.359683 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-config\") pod \"dnsmasq-dns-5bcd98d69f-42dnl\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.360063 master-0 kubenswrapper[31420]: I0220 12:18:20.359716 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-42dnl\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.360773 master-0 kubenswrapper[31420]: I0220 12:18:20.360603 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-42dnl\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.361960 master-0 kubenswrapper[31420]: I0220 12:18:20.361918 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-config\") pod \"dnsmasq-dns-5bcd98d69f-42dnl\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.381424 master-0 kubenswrapper[31420]: I0220 12:18:20.381381 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jz9n\" (UniqueName: \"kubernetes.io/projected/a8429e15-dbd4-44e4-ab7d-214ac5143db3-kube-api-access-5jz9n\") pod \"dnsmasq-dns-5bcd98d69f-42dnl\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.542317 master-0 kubenswrapper[31420]: I0220 12:18:20.542216 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7d78499c-zbtsx" event={"ID":"c7efaf54-8294-4db8-acff-033af4e873ee","Type":"ContainerStarted","Data":"9c24ae3d60b2917701ee5b89569b6e90ecc50b1f99b3035abf04706573e39336"} Feb 20 12:18:20.549083 master-0 kubenswrapper[31420]: I0220 12:18:20.548863 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:20.862690 master-0 kubenswrapper[31420]: I0220 12:18:20.862516 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-zbtsx"] Feb 20 12:18:20.881634 master-0 kubenswrapper[31420]: I0220 12:18:20.880351 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-xg5ps"] Feb 20 12:18:20.887646 master-0 kubenswrapper[31420]: I0220 12:18:20.884458 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:20.970864 master-0 kubenswrapper[31420]: I0220 12:18:20.970779 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-xg5ps"] Feb 20 12:18:21.100514 master-0 kubenswrapper[31420]: I0220 12:18:21.098939 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6gxz\" (UniqueName: \"kubernetes.io/projected/9a5729d2-75e9-403d-828f-2739cfc261e0-kube-api-access-k6gxz\") pod \"dnsmasq-dns-6b98d7b55c-xg5ps\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.100514 master-0 kubenswrapper[31420]: I0220 12:18:21.099192 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-config\") pod \"dnsmasq-dns-6b98d7b55c-xg5ps\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.100514 master-0 
kubenswrapper[31420]: I0220 12:18:21.099221 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-xg5ps\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.231630 master-0 kubenswrapper[31420]: I0220 12:18:21.229459 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-xg5ps\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.231630 master-0 kubenswrapper[31420]: I0220 12:18:21.229518 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-config\") pod \"dnsmasq-dns-6b98d7b55c-xg5ps\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.231630 master-0 kubenswrapper[31420]: I0220 12:18:21.229592 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6gxz\" (UniqueName: \"kubernetes.io/projected/9a5729d2-75e9-403d-828f-2739cfc261e0-kube-api-access-k6gxz\") pod \"dnsmasq-dns-6b98d7b55c-xg5ps\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.231630 master-0 kubenswrapper[31420]: I0220 12:18:21.230782 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-xg5ps\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.231630 master-0 
kubenswrapper[31420]: I0220 12:18:21.231318 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-config\") pod \"dnsmasq-dns-6b98d7b55c-xg5ps\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.253649 master-0 kubenswrapper[31420]: I0220 12:18:21.253610 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6gxz\" (UniqueName: \"kubernetes.io/projected/9a5729d2-75e9-403d-828f-2739cfc261e0-kube-api-access-k6gxz\") pod \"dnsmasq-dns-6b98d7b55c-xg5ps\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.277107 master-0 kubenswrapper[31420]: I0220 12:18:21.277055 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:21.335034 master-0 kubenswrapper[31420]: I0220 12:18:21.334992 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-42dnl"] Feb 20 12:18:21.558691 master-0 kubenswrapper[31420]: I0220 12:18:21.558627 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" event={"ID":"a8429e15-dbd4-44e4-ab7d-214ac5143db3","Type":"ContainerStarted","Data":"12ed51c26e4584f071bd18669e760ad6c676a0f2f7bec5710ab38bcff6b5e628"} Feb 20 12:18:21.804893 master-0 kubenswrapper[31420]: I0220 12:18:21.804786 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-xg5ps"] Feb 20 12:18:21.806372 master-0 kubenswrapper[31420]: W0220 12:18:21.806179 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a5729d2_75e9_403d_828f_2739cfc261e0.slice/crio-1244e04c421302b01dd1d7e7ba86be129bf7a258bc8810867677d8f00fb23cf1 WatchSource:0}: Error finding 
container 1244e04c421302b01dd1d7e7ba86be129bf7a258bc8810867677d8f00fb23cf1: Status 404 returned error can't find the container with id 1244e04c421302b01dd1d7e7ba86be129bf7a258bc8810867677d8f00fb23cf1 Feb 20 12:18:22.598826 master-0 kubenswrapper[31420]: I0220 12:18:22.598680 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" event={"ID":"9a5729d2-75e9-403d-828f-2739cfc261e0","Type":"ContainerStarted","Data":"1244e04c421302b01dd1d7e7ba86be129bf7a258bc8810867677d8f00fb23cf1"} Feb 20 12:18:24.303690 master-0 kubenswrapper[31420]: I0220 12:18:24.302458 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 20 12:18:24.305459 master-0 kubenswrapper[31420]: I0220 12:18:24.304830 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.309822 master-0 kubenswrapper[31420]: I0220 12:18:24.309799 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 20 12:18:24.309975 master-0 kubenswrapper[31420]: I0220 12:18:24.309935 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 20 12:18:24.309975 master-0 kubenswrapper[31420]: I0220 12:18:24.309968 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 20 12:18:24.310094 master-0 kubenswrapper[31420]: I0220 12:18:24.310017 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 20 12:18:24.310094 master-0 kubenswrapper[31420]: I0220 12:18:24.309964 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 20 12:18:24.310172 master-0 kubenswrapper[31420]: I0220 12:18:24.310109 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 20 12:18:24.312203 master-0 
kubenswrapper[31420]: I0220 12:18:24.312127 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 20 12:18:24.413443 master-0 kubenswrapper[31420]: I0220 12:18:24.413378 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413635 master-0 kubenswrapper[31420]: I0220 12:18:24.413478 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3027dc76-27b3-44c4-b217-885670c3e29e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413635 master-0 kubenswrapper[31420]: I0220 12:18:24.413513 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3027dc76-27b3-44c4-b217-885670c3e29e-config-data\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413635 master-0 kubenswrapper[31420]: I0220 12:18:24.413546 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfxsr\" (UniqueName: \"kubernetes.io/projected/3027dc76-27b3-44c4-b217-885670c3e29e-kube-api-access-sfxsr\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413635 master-0 kubenswrapper[31420]: I0220 12:18:24.413579 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413635 master-0 kubenswrapper[31420]: I0220 12:18:24.413630 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3027dc76-27b3-44c4-b217-885670c3e29e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413820 master-0 kubenswrapper[31420]: I0220 12:18:24.413731 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413820 master-0 kubenswrapper[31420]: I0220 12:18:24.413789 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3027dc76-27b3-44c4-b217-885670c3e29e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413887 master-0 kubenswrapper[31420]: I0220 12:18:24.413826 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3027dc76-27b3-44c4-b217-885670c3e29e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413887 master-0 kubenswrapper[31420]: I0220 12:18:24.413854 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-91f43e2a-8940-4137-b25e-8121af35f97b\" 
(UniqueName: \"kubernetes.io/csi/topolvm.io^9ed47c89-dfe8-4d49-b8b1-2259a32956d3\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.413958 master-0 kubenswrapper[31420]: I0220 12:18:24.413937 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.517159 master-0 kubenswrapper[31420]: I0220 12:18:24.517088 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.517159 master-0 kubenswrapper[31420]: I0220 12:18:24.517151 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3027dc76-27b3-44c4-b217-885670c3e29e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.517601 master-0 kubenswrapper[31420]: I0220 12:18:24.517490 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3027dc76-27b3-44c4-b217-885670c3e29e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.517601 master-0 kubenswrapper[31420]: I0220 12:18:24.517578 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-91f43e2a-8940-4137-b25e-8121af35f97b\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^9ed47c89-dfe8-4d49-b8b1-2259a32956d3\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.517745 master-0 kubenswrapper[31420]: I0220 12:18:24.517632 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.517745 master-0 kubenswrapper[31420]: I0220 12:18:24.517700 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.518092 master-0 kubenswrapper[31420]: I0220 12:18:24.518058 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3027dc76-27b3-44c4-b217-885670c3e29e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.518160 master-0 kubenswrapper[31420]: I0220 12:18:24.518137 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3027dc76-27b3-44c4-b217-885670c3e29e-config-data\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.518208 master-0 kubenswrapper[31420]: I0220 12:18:24.518170 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfxsr\" (UniqueName: 
\"kubernetes.io/projected/3027dc76-27b3-44c4-b217-885670c3e29e-kube-api-access-sfxsr\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.518208 master-0 kubenswrapper[31420]: I0220 12:18:24.518190 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.518615 master-0 kubenswrapper[31420]: I0220 12:18:24.518193 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.518707 master-0 kubenswrapper[31420]: I0220 12:18:24.518654 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3027dc76-27b3-44c4-b217-885670c3e29e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.519242 master-0 kubenswrapper[31420]: I0220 12:18:24.519203 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3027dc76-27b3-44c4-b217-885670c3e29e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.519681 master-0 kubenswrapper[31420]: I0220 12:18:24.519643 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3027dc76-27b3-44c4-b217-885670c3e29e-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.519813 master-0 kubenswrapper[31420]: I0220 12:18:24.519774 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3027dc76-27b3-44c4-b217-885670c3e29e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.519850 master-0 kubenswrapper[31420]: I0220 12:18:24.519777 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.521481 master-0 kubenswrapper[31420]: I0220 12:18:24.521282 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 12:18:24.521481 master-0 kubenswrapper[31420]: I0220 12:18:24.521311 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-91f43e2a-8940-4137-b25e-8121af35f97b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9ed47c89-dfe8-4d49-b8b1-2259a32956d3\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/17c81c841efbdf95dbc08e70bf0a9c286bff7354bb99cb56a2e5225e1fe3e570/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.526546 master-0 kubenswrapper[31420]: I0220 12:18:24.526482 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3027dc76-27b3-44c4-b217-885670c3e29e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.527591 master-0 kubenswrapper[31420]: I0220 12:18:24.527377 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.530122 master-0 kubenswrapper[31420]: I0220 12:18:24.530038 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3027dc76-27b3-44c4-b217-885670c3e29e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.533611 master-0 kubenswrapper[31420]: I0220 12:18:24.533063 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3027dc76-27b3-44c4-b217-885670c3e29e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") 
" pod="openstack/rabbitmq-server-0" Feb 20 12:18:24.540852 master-0 kubenswrapper[31420]: I0220 12:18:24.540816 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfxsr\" (UniqueName: \"kubernetes.io/projected/3027dc76-27b3-44c4-b217-885670c3e29e-kube-api-access-sfxsr\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:25.077398 master-0 kubenswrapper[31420]: I0220 12:18:25.077337 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 20 12:18:25.079380 master-0 kubenswrapper[31420]: I0220 12:18:25.079346 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.081725 master-0 kubenswrapper[31420]: I0220 12:18:25.081678 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 20 12:18:25.082082 master-0 kubenswrapper[31420]: I0220 12:18:25.081907 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 20 12:18:25.084384 master-0 kubenswrapper[31420]: I0220 12:18:25.083483 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 20 12:18:25.084384 master-0 kubenswrapper[31420]: I0220 12:18:25.083507 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 20 12:18:25.084384 master-0 kubenswrapper[31420]: I0220 12:18:25.083616 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 20 12:18:25.085440 master-0 kubenswrapper[31420]: I0220 12:18:25.085296 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 20 12:18:25.096351 master-0 kubenswrapper[31420]: I0220 12:18:25.096295 31420 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 20 12:18:25.130488 master-0 kubenswrapper[31420]: I0220 12:18:25.130435 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de0e242c-6018-42c0-8a59-b755e2bd36b0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130696 master-0 kubenswrapper[31420]: I0220 12:18:25.130492 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-880d2b66-0523-4802-943d-67899302c777\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0322a6a-cf3b-463e-8c3a-0a2846c5f5dc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130696 master-0 kubenswrapper[31420]: I0220 12:18:25.130537 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130696 master-0 kubenswrapper[31420]: I0220 12:18:25.130583 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297tt\" (UniqueName: \"kubernetes.io/projected/de0e242c-6018-42c0-8a59-b755e2bd36b0-kube-api-access-297tt\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130696 master-0 kubenswrapper[31420]: I0220 12:18:25.130619 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/de0e242c-6018-42c0-8a59-b755e2bd36b0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130696 master-0 kubenswrapper[31420]: I0220 12:18:25.130648 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130696 master-0 kubenswrapper[31420]: I0220 12:18:25.130671 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130696 master-0 kubenswrapper[31420]: I0220 12:18:25.130701 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de0e242c-6018-42c0-8a59-b755e2bd36b0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130926 master-0 kubenswrapper[31420]: I0220 12:18:25.130721 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130926 master-0 kubenswrapper[31420]: I0220 12:18:25.130801 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de0e242c-6018-42c0-8a59-b755e2bd36b0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.130926 master-0 kubenswrapper[31420]: I0220 12:18:25.130826 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de0e242c-6018-42c0-8a59-b755e2bd36b0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.225379 master-0 kubenswrapper[31420]: I0220 12:18:25.225302 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 20 12:18:25.226625 master-0 kubenswrapper[31420]: I0220 12:18:25.226567 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 20 12:18:25.229322 master-0 kubenswrapper[31420]: I0220 12:18:25.228989 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 20 12:18:25.230401 master-0 kubenswrapper[31420]: I0220 12:18:25.230380 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 20 12:18:25.233618 master-0 kubenswrapper[31420]: I0220 12:18:25.233571 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de0e242c-6018-42c0-8a59-b755e2bd36b0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.233618 master-0 kubenswrapper[31420]: I0220 12:18:25.233615 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d420cbb8-46c6-400b-b143-ab6a11e0ac04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.233728 master-0 kubenswrapper[31420]: I0220 12:18:25.233642 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de0e242c-6018-42c0-8a59-b755e2bd36b0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.233728 master-0 kubenswrapper[31420]: I0220 12:18:25.233672 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de0e242c-6018-42c0-8a59-b755e2bd36b0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 
12:18:25.233814 master-0 kubenswrapper[31420]: I0220 12:18:25.233745 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-880d2b66-0523-4802-943d-67899302c777\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0322a6a-cf3b-463e-8c3a-0a2846c5f5dc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.233814 master-0 kubenswrapper[31420]: I0220 12:18:25.233777 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.233814 master-0 kubenswrapper[31420]: I0220 12:18:25.233807 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d420cbb8-46c6-400b-b143-ab6a11e0ac04-kolla-config\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.233920 master-0 kubenswrapper[31420]: I0220 12:18:25.233838 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297tt\" (UniqueName: \"kubernetes.io/projected/de0e242c-6018-42c0-8a59-b755e2bd36b0-kube-api-access-297tt\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.233920 master-0 kubenswrapper[31420]: I0220 12:18:25.233896 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skcbp\" (UniqueName: \"kubernetes.io/projected/d420cbb8-46c6-400b-b143-ab6a11e0ac04-kube-api-access-skcbp\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 
12:18:25.233978 master-0 kubenswrapper[31420]: I0220 12:18:25.233949 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0e242c-6018-42c0-8a59-b755e2bd36b0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.234014 master-0 kubenswrapper[31420]: I0220 12:18:25.233986 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.234056 master-0 kubenswrapper[31420]: I0220 12:18:25.234019 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.234056 master-0 kubenswrapper[31420]: I0220 12:18:25.234040 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d420cbb8-46c6-400b-b143-ab6a11e0ac04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.234131 master-0 kubenswrapper[31420]: I0220 12:18:25.234076 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d420cbb8-46c6-400b-b143-ab6a11e0ac04-config-data\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.234131 master-0 kubenswrapper[31420]: 
I0220 12:18:25.234104 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de0e242c-6018-42c0-8a59-b755e2bd36b0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.234131 master-0 kubenswrapper[31420]: I0220 12:18:25.234128 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.236456 master-0 kubenswrapper[31420]: I0220 12:18:25.235173 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.236456 master-0 kubenswrapper[31420]: I0220 12:18:25.235848 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0e242c-6018-42c0-8a59-b755e2bd36b0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.236456 master-0 kubenswrapper[31420]: I0220 12:18:25.235970 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de0e242c-6018-42c0-8a59-b755e2bd36b0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.238060 master-0 kubenswrapper[31420]: I0220 12:18:25.238011 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de0e242c-6018-42c0-8a59-b755e2bd36b0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.238233 master-0 kubenswrapper[31420]: I0220 12:18:25.238167 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.240791 master-0 kubenswrapper[31420]: I0220 12:18:25.240735 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 20 12:18:25.240791 master-0 kubenswrapper[31420]: I0220 12:18:25.240772 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-880d2b66-0523-4802-943d-67899302c777\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0322a6a-cf3b-463e-8c3a-0a2846c5f5dc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/065bae3c09cedaa80a4b9cfcf666184a0a38acf1fd5d46220613413447d036d0/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.248491 master-0 kubenswrapper[31420]: I0220 12:18:25.248402 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de0e242c-6018-42c0-8a59-b755e2bd36b0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.248491 master-0 kubenswrapper[31420]: I0220 12:18:25.248428 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.248838 master-0 kubenswrapper[31420]: I0220 12:18:25.248784 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de0e242c-6018-42c0-8a59-b755e2bd36b0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.248916 master-0 kubenswrapper[31420]: I0220 12:18:25.248899 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 20 12:18:25.251109 master-0 kubenswrapper[31420]: I0220 12:18:25.251045 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de0e242c-6018-42c0-8a59-b755e2bd36b0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.273070 master-0 kubenswrapper[31420]: I0220 12:18:25.271401 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 20 12:18:25.273070 master-0 kubenswrapper[31420]: I0220 12:18:25.271651 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297tt\" (UniqueName: \"kubernetes.io/projected/de0e242c-6018-42c0-8a59-b755e2bd36b0-kube-api-access-297tt\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:25.335628 master-0 kubenswrapper[31420]: I0220 12:18:25.335512 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d420cbb8-46c6-400b-b143-ab6a11e0ac04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.336291 master-0 kubenswrapper[31420]: I0220 12:18:25.335954 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d420cbb8-46c6-400b-b143-ab6a11e0ac04-kolla-config\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.336291 master-0 kubenswrapper[31420]: I0220 12:18:25.336170 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skcbp\" (UniqueName: \"kubernetes.io/projected/d420cbb8-46c6-400b-b143-ab6a11e0ac04-kube-api-access-skcbp\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.336438 master-0 kubenswrapper[31420]: I0220 12:18:25.336364 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d420cbb8-46c6-400b-b143-ab6a11e0ac04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.336539 master-0 kubenswrapper[31420]: I0220 12:18:25.336448 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d420cbb8-46c6-400b-b143-ab6a11e0ac04-config-data\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.336648 master-0 kubenswrapper[31420]: I0220 12:18:25.336604 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d420cbb8-46c6-400b-b143-ab6a11e0ac04-kolla-config\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " 
pod="openstack/memcached-0" Feb 20 12:18:25.337690 master-0 kubenswrapper[31420]: I0220 12:18:25.337665 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d420cbb8-46c6-400b-b143-ab6a11e0ac04-config-data\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.339646 master-0 kubenswrapper[31420]: I0220 12:18:25.339614 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d420cbb8-46c6-400b-b143-ab6a11e0ac04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.343738 master-0 kubenswrapper[31420]: I0220 12:18:25.343519 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d420cbb8-46c6-400b-b143-ab6a11e0ac04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.353808 master-0 kubenswrapper[31420]: I0220 12:18:25.353744 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skcbp\" (UniqueName: \"kubernetes.io/projected/d420cbb8-46c6-400b-b143-ab6a11e0ac04-kube-api-access-skcbp\") pod \"memcached-0\" (UID: \"d420cbb8-46c6-400b-b143-ab6a11e0ac04\") " pod="openstack/memcached-0" Feb 20 12:18:25.616126 master-0 kubenswrapper[31420]: I0220 12:18:25.615996 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 20 12:18:26.102627 master-0 kubenswrapper[31420]: I0220 12:18:26.102557 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-91f43e2a-8940-4137-b25e-8121af35f97b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9ed47c89-dfe8-4d49-b8b1-2259a32956d3\") pod \"rabbitmq-server-0\" (UID: \"3027dc76-27b3-44c4-b217-885670c3e29e\") " pod="openstack/rabbitmq-server-0" Feb 20 12:18:26.161425 master-0 kubenswrapper[31420]: I0220 12:18:26.161368 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 20 12:18:26.483289 master-0 kubenswrapper[31420]: I0220 12:18:26.483079 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 20 12:18:26.484683 master-0 kubenswrapper[31420]: I0220 12:18:26.484638 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 20 12:18:26.487756 master-0 kubenswrapper[31420]: I0220 12:18:26.487700 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 20 12:18:26.488090 master-0 kubenswrapper[31420]: I0220 12:18:26.488040 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 20 12:18:26.488342 master-0 kubenswrapper[31420]: I0220 12:18:26.488319 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 20 12:18:26.512395 master-0 kubenswrapper[31420]: I0220 12:18:26.512319 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 20 12:18:26.601747 master-0 kubenswrapper[31420]: I0220 12:18:26.601676 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a777598a-6198-4385-860b-a04696e29a88-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.601747 master-0 kubenswrapper[31420]: I0220 12:18:26.601741 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d6368fce-10fb-4323-b33e-2761b1a5a593\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b1d5f825-dbdc-4d34-94d4-1bdb4c9195a4\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.602226 master-0 kubenswrapper[31420]: I0220 12:18:26.601774 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a777598a-6198-4385-860b-a04696e29a88-kolla-config\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.602461 master-0 kubenswrapper[31420]: I0220 12:18:26.602401 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a777598a-6198-4385-860b-a04696e29a88-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.602615 master-0 kubenswrapper[31420]: I0220 12:18:26.602552 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dcln\" (UniqueName: \"kubernetes.io/projected/a777598a-6198-4385-860b-a04696e29a88-kube-api-access-7dcln\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.602615 master-0 kubenswrapper[31420]: I0220 12:18:26.602603 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/a777598a-6198-4385-860b-a04696e29a88-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.602729 master-0 kubenswrapper[31420]: I0220 12:18:26.602673 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a777598a-6198-4385-860b-a04696e29a88-config-data-default\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.602829 master-0 kubenswrapper[31420]: I0220 12:18:26.602797 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a777598a-6198-4385-860b-a04696e29a88-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.704127 master-0 kubenswrapper[31420]: I0220 12:18:26.704057 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a777598a-6198-4385-860b-a04696e29a88-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.704328 master-0 kubenswrapper[31420]: I0220 12:18:26.704195 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a777598a-6198-4385-860b-a04696e29a88-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.704328 master-0 kubenswrapper[31420]: I0220 12:18:26.704223 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d6368fce-10fb-4323-b33e-2761b1a5a593\" 
(UniqueName: \"kubernetes.io/csi/topolvm.io^b1d5f825-dbdc-4d34-94d4-1bdb4c9195a4\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.704328 master-0 kubenswrapper[31420]: I0220 12:18:26.704243 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a777598a-6198-4385-860b-a04696e29a88-kolla-config\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.704328 master-0 kubenswrapper[31420]: I0220 12:18:26.704274 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a777598a-6198-4385-860b-a04696e29a88-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.704328 master-0 kubenswrapper[31420]: I0220 12:18:26.704308 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dcln\" (UniqueName: \"kubernetes.io/projected/a777598a-6198-4385-860b-a04696e29a88-kube-api-access-7dcln\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.718017 master-0 kubenswrapper[31420]: I0220 12:18:26.704520 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a777598a-6198-4385-860b-a04696e29a88-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.718017 master-0 kubenswrapper[31420]: I0220 12:18:26.704560 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/a777598a-6198-4385-860b-a04696e29a88-config-data-default\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.718017 master-0 kubenswrapper[31420]: I0220 12:18:26.705797 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a777598a-6198-4385-860b-a04696e29a88-config-data-default\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.718017 master-0 kubenswrapper[31420]: I0220 12:18:26.706245 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a777598a-6198-4385-860b-a04696e29a88-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.718017 master-0 kubenswrapper[31420]: I0220 12:18:26.706717 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a777598a-6198-4385-860b-a04696e29a88-kolla-config\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.718017 master-0 kubenswrapper[31420]: I0220 12:18:26.707160 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a777598a-6198-4385-860b-a04696e29a88-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.718017 master-0 kubenswrapper[31420]: I0220 12:18:26.710047 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a777598a-6198-4385-860b-a04696e29a88-combined-ca-bundle\") pod 
\"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.718017 master-0 kubenswrapper[31420]: I0220 12:18:26.712192 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 20 12:18:26.718017 master-0 kubenswrapper[31420]: I0220 12:18:26.712223 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d6368fce-10fb-4323-b33e-2761b1a5a593\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b1d5f825-dbdc-4d34-94d4-1bdb4c9195a4\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0ec4906c8c76051adbca5aefbf66b4b9e855075d8fbebf4ec1834fc30fbaf132/globalmount\"" pod="openstack/openstack-galera-0" Feb 20 12:18:26.719601 master-0 kubenswrapper[31420]: I0220 12:18:26.719511 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a777598a-6198-4385-860b-a04696e29a88-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.723103 master-0 kubenswrapper[31420]: I0220 12:18:26.723049 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dcln\" (UniqueName: \"kubernetes.io/projected/a777598a-6198-4385-860b-a04696e29a88-kube-api-access-7dcln\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:26.888794 master-0 kubenswrapper[31420]: I0220 12:18:26.888671 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 20 12:18:26.890977 master-0 kubenswrapper[31420]: I0220 12:18:26.890939 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:26.909184 master-0 kubenswrapper[31420]: I0220 12:18:26.898931 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 20 12:18:26.909184 master-0 kubenswrapper[31420]: I0220 12:18:26.899341 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 20 12:18:26.909184 master-0 kubenswrapper[31420]: I0220 12:18:26.904572 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 20 12:18:26.926281 master-0 kubenswrapper[31420]: I0220 12:18:26.925121 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 20 12:18:27.013077 master-0 kubenswrapper[31420]: I0220 12:18:27.011037 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc7ea609-4c79-4ae2-b3a4-b1da7628df33\" (UniqueName: \"kubernetes.io/csi/topolvm.io^151a262d-f1da-419d-ade6-45bde84713de\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.013077 master-0 kubenswrapper[31420]: I0220 12:18:27.011138 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5e587a0-149e-4023-9766-0ac33a7a5d4d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.013077 master-0 kubenswrapper[31420]: I0220 12:18:27.011185 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c5e587a0-149e-4023-9766-0ac33a7a5d4d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.013077 master-0 kubenswrapper[31420]: I0220 12:18:27.011208 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c5e587a0-149e-4023-9766-0ac33a7a5d4d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.013077 master-0 kubenswrapper[31420]: I0220 12:18:27.011322 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c5e587a0-149e-4023-9766-0ac33a7a5d4d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.013077 master-0 kubenswrapper[31420]: I0220 12:18:27.011417 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzxnh\" (UniqueName: \"kubernetes.io/projected/c5e587a0-149e-4023-9766-0ac33a7a5d4d-kube-api-access-bzxnh\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.013077 master-0 kubenswrapper[31420]: I0220 12:18:27.011457 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e587a0-149e-4023-9766-0ac33a7a5d4d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.013077 master-0 kubenswrapper[31420]: I0220 12:18:27.011485 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5e587a0-149e-4023-9766-0ac33a7a5d4d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.113067 master-0 kubenswrapper[31420]: I0220 12:18:27.112997 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e587a0-149e-4023-9766-0ac33a7a5d4d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.113067 master-0 kubenswrapper[31420]: I0220 12:18:27.113058 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e587a0-149e-4023-9766-0ac33a7a5d4d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.113303 master-0 kubenswrapper[31420]: I0220 12:18:27.113083 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc7ea609-4c79-4ae2-b3a4-b1da7628df33\" (UniqueName: \"kubernetes.io/csi/topolvm.io^151a262d-f1da-419d-ade6-45bde84713de\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.113303 master-0 kubenswrapper[31420]: I0220 12:18:27.113111 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5e587a0-149e-4023-9766-0ac33a7a5d4d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.113303 master-0 kubenswrapper[31420]: I0220 12:18:27.113137 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/c5e587a0-149e-4023-9766-0ac33a7a5d4d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.113303 master-0 kubenswrapper[31420]: I0220 12:18:27.113153 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c5e587a0-149e-4023-9766-0ac33a7a5d4d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.113303 master-0 kubenswrapper[31420]: I0220 12:18:27.113225 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c5e587a0-149e-4023-9766-0ac33a7a5d4d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.113303 master-0 kubenswrapper[31420]: I0220 12:18:27.113284 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzxnh\" (UniqueName: \"kubernetes.io/projected/c5e587a0-149e-4023-9766-0ac33a7a5d4d-kube-api-access-bzxnh\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.128487 master-0 kubenswrapper[31420]: I0220 12:18:27.114059 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c5e587a0-149e-4023-9766-0ac33a7a5d4d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.128487 master-0 kubenswrapper[31420]: I0220 12:18:27.114357 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/c5e587a0-149e-4023-9766-0ac33a7a5d4d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.128487 master-0 kubenswrapper[31420]: I0220 12:18:27.114642 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c5e587a0-149e-4023-9766-0ac33a7a5d4d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.128487 master-0 kubenswrapper[31420]: I0220 12:18:27.128178 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 20 12:18:27.128487 master-0 kubenswrapper[31420]: I0220 12:18:27.128241 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc7ea609-4c79-4ae2-b3a4-b1da7628df33\" (UniqueName: \"kubernetes.io/csi/topolvm.io^151a262d-f1da-419d-ade6-45bde84713de\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/266c88ec7be563ceada5d717cd2069490e030c0894d7976ed1c3e5ecf0055208/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.129378 master-0 kubenswrapper[31420]: I0220 12:18:27.129297 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5e587a0-149e-4023-9766-0ac33a7a5d4d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.136123 master-0 kubenswrapper[31420]: I0220 12:18:27.136089 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c5e587a0-149e-4023-9766-0ac33a7a5d4d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.144599 master-0 kubenswrapper[31420]: I0220 12:18:27.142463 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e587a0-149e-4023-9766-0ac33a7a5d4d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:27.161549 master-0 kubenswrapper[31420]: I0220 12:18:27.145342 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzxnh\" (UniqueName: \"kubernetes.io/projected/c5e587a0-149e-4023-9766-0ac33a7a5d4d-kube-api-access-bzxnh\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:28.266301 master-0 kubenswrapper[31420]: I0220 12:18:28.266243 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-880d2b66-0523-4802-943d-67899302c777\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0322a6a-cf3b-463e-8c3a-0a2846c5f5dc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de0e242c-6018-42c0-8a59-b755e2bd36b0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:28.455832 master-0 kubenswrapper[31420]: I0220 12:18:28.455662 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:18:29.351238 master-0 kubenswrapper[31420]: I0220 12:18:29.351183 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d6368fce-10fb-4323-b33e-2761b1a5a593\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b1d5f825-dbdc-4d34-94d4-1bdb4c9195a4\") pod \"openstack-galera-0\" (UID: \"a777598a-6198-4385-860b-a04696e29a88\") " pod="openstack/openstack-galera-0" Feb 20 12:18:29.517998 master-0 kubenswrapper[31420]: I0220 12:18:29.517934 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 20 12:18:30.862689 master-0 kubenswrapper[31420]: I0220 12:18:30.862573 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc7ea609-4c79-4ae2-b3a4-b1da7628df33\" (UniqueName: \"kubernetes.io/csi/topolvm.io^151a262d-f1da-419d-ade6-45bde84713de\") pod \"openstack-cell1-galera-0\" (UID: \"c5e587a0-149e-4023-9766-0ac33a7a5d4d\") " pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:31.129148 master-0 kubenswrapper[31420]: I0220 12:18:31.125006 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 20 12:18:31.142377 master-0 kubenswrapper[31420]: I0220 12:18:31.142302 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4twdx"] Feb 20 12:18:31.143573 master-0 kubenswrapper[31420]: I0220 12:18:31.143505 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.151860 master-0 kubenswrapper[31420]: I0220 12:18:31.147191 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 20 12:18:31.151860 master-0 kubenswrapper[31420]: I0220 12:18:31.148996 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 20 12:18:31.157608 master-0 kubenswrapper[31420]: I0220 12:18:31.153519 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hzmsb"] Feb 20 12:18:31.160193 master-0 kubenswrapper[31420]: I0220 12:18:31.160159 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.166593 master-0 kubenswrapper[31420]: I0220 12:18:31.163665 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4twdx"] Feb 20 12:18:31.177003 master-0 kubenswrapper[31420]: I0220 12:18:31.176938 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hzmsb"] Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212024 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-var-run\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212117 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-var-lib\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 
12:18:31.212160 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-var-log\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212196 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d653c352-bccc-4fb7-bba0-97ad923e92e4-combined-ca-bundle\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212247 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d653c352-bccc-4fb7-bba0-97ad923e92e4-scripts\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212291 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d653c352-bccc-4fb7-bba0-97ad923e92e4-var-log-ovn\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212335 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-etc-ovs\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: 
I0220 12:18:31.212376 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d653c352-bccc-4fb7-bba0-97ad923e92e4-var-run\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212411 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d653c352-bccc-4fb7-bba0-97ad923e92e4-var-run-ovn\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212443 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp227\" (UniqueName: \"kubernetes.io/projected/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-kube-api-access-xp227\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212602 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2xp\" (UniqueName: \"kubernetes.io/projected/d653c352-bccc-4fb7-bba0-97ad923e92e4-kube-api-access-kb2xp\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212682 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-scripts\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 
12:18:31.213277 master-0 kubenswrapper[31420]: I0220 12:18:31.212737 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d653c352-bccc-4fb7-bba0-97ad923e92e4-ovn-controller-tls-certs\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.314617 master-0 kubenswrapper[31420]: I0220 12:18:31.314495 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-var-lib\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.314617 master-0 kubenswrapper[31420]: I0220 12:18:31.314616 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-var-log\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314638 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d653c352-bccc-4fb7-bba0-97ad923e92e4-combined-ca-bundle\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314683 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d653c352-bccc-4fb7-bba0-97ad923e92e4-scripts\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: 
I0220 12:18:31.314712 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d653c352-bccc-4fb7-bba0-97ad923e92e4-var-log-ovn\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314737 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-etc-ovs\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314764 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d653c352-bccc-4fb7-bba0-97ad923e92e4-var-run\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314789 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d653c352-bccc-4fb7-bba0-97ad923e92e4-var-run-ovn\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314808 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp227\" (UniqueName: \"kubernetes.io/projected/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-kube-api-access-xp227\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314837 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kb2xp\" (UniqueName: \"kubernetes.io/projected/d653c352-bccc-4fb7-bba0-97ad923e92e4-kube-api-access-kb2xp\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314862 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-scripts\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314890 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d653c352-bccc-4fb7-bba0-97ad923e92e4-ovn-controller-tls-certs\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.314916 master-0 kubenswrapper[31420]: I0220 12:18:31.314920 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-var-run\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.315916 master-0 kubenswrapper[31420]: I0220 12:18:31.315447 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-var-run\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.315916 master-0 kubenswrapper[31420]: I0220 12:18:31.315566 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" 
(UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-var-lib\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.315916 master-0 kubenswrapper[31420]: I0220 12:18:31.315595 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-etc-ovs\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.315916 master-0 kubenswrapper[31420]: I0220 12:18:31.315638 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d653c352-bccc-4fb7-bba0-97ad923e92e4-var-run\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.315916 master-0 kubenswrapper[31420]: I0220 12:18:31.315719 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d653c352-bccc-4fb7-bba0-97ad923e92e4-var-run-ovn\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.315916 master-0 kubenswrapper[31420]: I0220 12:18:31.315761 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-var-log\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.318461 master-0 kubenswrapper[31420]: I0220 12:18:31.317690 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d653c352-bccc-4fb7-bba0-97ad923e92e4-var-log-ovn\") pod \"ovn-controller-4twdx\" (UID: 
\"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.318642 master-0 kubenswrapper[31420]: I0220 12:18:31.318501 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d653c352-bccc-4fb7-bba0-97ad923e92e4-ovn-controller-tls-certs\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.318877 master-0 kubenswrapper[31420]: I0220 12:18:31.318848 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d653c352-bccc-4fb7-bba0-97ad923e92e4-combined-ca-bundle\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.320620 master-0 kubenswrapper[31420]: I0220 12:18:31.319717 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d653c352-bccc-4fb7-bba0-97ad923e92e4-scripts\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.320620 master-0 kubenswrapper[31420]: I0220 12:18:31.320064 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-scripts\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.334706 master-0 kubenswrapper[31420]: I0220 12:18:31.334645 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2xp\" (UniqueName: \"kubernetes.io/projected/d653c352-bccc-4fb7-bba0-97ad923e92e4-kube-api-access-kb2xp\") pod \"ovn-controller-4twdx\" (UID: \"d653c352-bccc-4fb7-bba0-97ad923e92e4\") " pod="openstack/ovn-controller-4twdx" Feb 20 
12:18:31.336442 master-0 kubenswrapper[31420]: I0220 12:18:31.336399 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp227\" (UniqueName: \"kubernetes.io/projected/c0dc8f5a-78ac-4bdf-9b05-953e0edf6616-kube-api-access-xp227\") pod \"ovn-controller-ovs-hzmsb\" (UID: \"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616\") " pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:31.503655 master-0 kubenswrapper[31420]: I0220 12:18:31.503518 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4twdx" Feb 20 12:18:31.521459 master-0 kubenswrapper[31420]: I0220 12:18:31.521382 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:33.402615 master-0 kubenswrapper[31420]: I0220 12:18:33.398700 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 20 12:18:33.402615 master-0 kubenswrapper[31420]: I0220 12:18:33.400656 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.405443 master-0 kubenswrapper[31420]: I0220 12:18:33.405399 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 20 12:18:33.405705 master-0 kubenswrapper[31420]: I0220 12:18:33.405678 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 20 12:18:33.405871 master-0 kubenswrapper[31420]: I0220 12:18:33.405844 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 20 12:18:33.406038 master-0 kubenswrapper[31420]: I0220 12:18:33.406011 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 20 12:18:33.408147 master-0 kubenswrapper[31420]: I0220 12:18:33.408088 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 20 12:18:33.566080 master-0 kubenswrapper[31420]: I0220 12:18:33.565951 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a534afa2-10da-4837-9cf8-6b2416df04dd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.566080 master-0 kubenswrapper[31420]: I0220 12:18:33.566034 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a534afa2-10da-4837-9cf8-6b2416df04dd-config\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.566080 master-0 kubenswrapper[31420]: I0220 12:18:33.566055 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6f21ad3-3cfc-4b51-a176-975852c6496a\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^57783d47-4fc0-44ca-9806-68c4afbf7bd5\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.566080 master-0 kubenswrapper[31420]: I0220 12:18:33.566074 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnfb\" (UniqueName: \"kubernetes.io/projected/a534afa2-10da-4837-9cf8-6b2416df04dd-kube-api-access-clnfb\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.566433 master-0 kubenswrapper[31420]: I0220 12:18:33.566117 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a534afa2-10da-4837-9cf8-6b2416df04dd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.566433 master-0 kubenswrapper[31420]: I0220 12:18:33.566142 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a534afa2-10da-4837-9cf8-6b2416df04dd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.566433 master-0 kubenswrapper[31420]: I0220 12:18:33.566263 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a534afa2-10da-4837-9cf8-6b2416df04dd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.566433 master-0 kubenswrapper[31420]: I0220 12:18:33.566305 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a534afa2-10da-4837-9cf8-6b2416df04dd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.668696 master-0 kubenswrapper[31420]: I0220 12:18:33.668477 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a534afa2-10da-4837-9cf8-6b2416df04dd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.668696 master-0 kubenswrapper[31420]: I0220 12:18:33.668556 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a534afa2-10da-4837-9cf8-6b2416df04dd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.668696 master-0 kubenswrapper[31420]: I0220 12:18:33.668601 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a534afa2-10da-4837-9cf8-6b2416df04dd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.668696 master-0 kubenswrapper[31420]: I0220 12:18:33.668632 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a534afa2-10da-4837-9cf8-6b2416df04dd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.668696 master-0 kubenswrapper[31420]: I0220 12:18:33.668707 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/a534afa2-10da-4837-9cf8-6b2416df04dd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.669446 master-0 kubenswrapper[31420]: I0220 12:18:33.668780 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a534afa2-10da-4837-9cf8-6b2416df04dd-config\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.669446 master-0 kubenswrapper[31420]: I0220 12:18:33.668801 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b6f21ad3-3cfc-4b51-a176-975852c6496a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^57783d47-4fc0-44ca-9806-68c4afbf7bd5\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.669446 master-0 kubenswrapper[31420]: I0220 12:18:33.668831 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clnfb\" (UniqueName: \"kubernetes.io/projected/a534afa2-10da-4837-9cf8-6b2416df04dd-kube-api-access-clnfb\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.670813 master-0 kubenswrapper[31420]: I0220 12:18:33.670760 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a534afa2-10da-4837-9cf8-6b2416df04dd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.671253 master-0 kubenswrapper[31420]: I0220 12:18:33.671197 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a534afa2-10da-4837-9cf8-6b2416df04dd-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.671939 master-0 kubenswrapper[31420]: I0220 12:18:33.671880 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a534afa2-10da-4837-9cf8-6b2416df04dd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.672714 master-0 kubenswrapper[31420]: I0220 12:18:33.672680 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 20 12:18:33.672896 master-0 kubenswrapper[31420]: I0220 12:18:33.672759 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b6f21ad3-3cfc-4b51-a176-975852c6496a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^57783d47-4fc0-44ca-9806-68c4afbf7bd5\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/fcfe01f7396667593a3b91da7da78d42294362b9d7f2aabbaa0ba6981eaefdce/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.673847 master-0 kubenswrapper[31420]: I0220 12:18:33.673781 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a534afa2-10da-4837-9cf8-6b2416df04dd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.678913 master-0 kubenswrapper[31420]: I0220 12:18:33.678822 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a534afa2-10da-4837-9cf8-6b2416df04dd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.686550 master-0 
kubenswrapper[31420]: I0220 12:18:33.686463 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a534afa2-10da-4837-9cf8-6b2416df04dd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:33.702181 master-0 kubenswrapper[31420]: I0220 12:18:33.697857 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clnfb\" (UniqueName: \"kubernetes.io/projected/a534afa2-10da-4837-9cf8-6b2416df04dd-kube-api-access-clnfb\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:34.620819 master-0 kubenswrapper[31420]: I0220 12:18:34.620637 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 20 12:18:34.623892 master-0 kubenswrapper[31420]: I0220 12:18:34.623819 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.626839 master-0 kubenswrapper[31420]: I0220 12:18:34.626767 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 20 12:18:34.630666 master-0 kubenswrapper[31420]: I0220 12:18:34.626872 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 20 12:18:34.630666 master-0 kubenswrapper[31420]: I0220 12:18:34.627327 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 20 12:18:34.630666 master-0 kubenswrapper[31420]: I0220 12:18:34.630439 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 20 12:18:34.791218 master-0 kubenswrapper[31420]: I0220 12:18:34.791141 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5704bbe-5b81-411c-9641-e54f70784e12-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.791474 master-0 kubenswrapper[31420]: I0220 12:18:34.791259 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5704bbe-5b81-411c-9641-e54f70784e12-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.791474 master-0 kubenswrapper[31420]: I0220 12:18:34.791304 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97zx\" (UniqueName: \"kubernetes.io/projected/e5704bbe-5b81-411c-9641-e54f70784e12-kube-api-access-z97zx\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 
12:18:34.791474 master-0 kubenswrapper[31420]: I0220 12:18:34.791345 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5704bbe-5b81-411c-9641-e54f70784e12-config\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.791474 master-0 kubenswrapper[31420]: I0220 12:18:34.791400 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1a5fd64-50a6-41be-a75e-fc7ceaf698ae\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b02a2565-b59c-48fe-b71c-fc6c5c528aa1\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.791621 master-0 kubenswrapper[31420]: I0220 12:18:34.791479 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e5704bbe-5b81-411c-9641-e54f70784e12-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.791621 master-0 kubenswrapper[31420]: I0220 12:18:34.791515 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5704bbe-5b81-411c-9641-e54f70784e12-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.791738 master-0 kubenswrapper[31420]: I0220 12:18:34.791702 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5704bbe-5b81-411c-9641-e54f70784e12-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " 
pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.895360 master-0 kubenswrapper[31420]: I0220 12:18:34.895279 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5704bbe-5b81-411c-9641-e54f70784e12-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.895604 master-0 kubenswrapper[31420]: I0220 12:18:34.895414 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5704bbe-5b81-411c-9641-e54f70784e12-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.895604 master-0 kubenswrapper[31420]: I0220 12:18:34.895456 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97zx\" (UniqueName: \"kubernetes.io/projected/e5704bbe-5b81-411c-9641-e54f70784e12-kube-api-access-z97zx\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.895604 master-0 kubenswrapper[31420]: I0220 12:18:34.895499 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5704bbe-5b81-411c-9641-e54f70784e12-config\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.895823 master-0 kubenswrapper[31420]: I0220 12:18:34.895732 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1a5fd64-50a6-41be-a75e-fc7ceaf698ae\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b02a2565-b59c-48fe-b71c-fc6c5c528aa1\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.895976 master-0 kubenswrapper[31420]: 
I0220 12:18:34.895941 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e5704bbe-5b81-411c-9641-e54f70784e12-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.896102 master-0 kubenswrapper[31420]: I0220 12:18:34.895992 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5704bbe-5b81-411c-9641-e54f70784e12-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.896222 master-0 kubenswrapper[31420]: I0220 12:18:34.896183 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5704bbe-5b81-411c-9641-e54f70784e12-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.897074 master-0 kubenswrapper[31420]: I0220 12:18:34.897041 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5704bbe-5b81-411c-9641-e54f70784e12-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.897857 master-0 kubenswrapper[31420]: I0220 12:18:34.897835 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 12:18:34.897915 master-0 kubenswrapper[31420]: I0220 12:18:34.897865 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1a5fd64-50a6-41be-a75e-fc7ceaf698ae\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b02a2565-b59c-48fe-b71c-fc6c5c528aa1\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/12b7c24aab7db98ebced4d9ba3e4cf49ee20f7de01659d7251b216519e2e676a/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.897915 master-0 kubenswrapper[31420]: I0220 12:18:34.897876 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e5704bbe-5b81-411c-9641-e54f70784e12-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.898484 master-0 kubenswrapper[31420]: I0220 12:18:34.898235 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5704bbe-5b81-411c-9641-e54f70784e12-config\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.901361 master-0 kubenswrapper[31420]: I0220 12:18:34.901298 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5704bbe-5b81-411c-9641-e54f70784e12-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.901818 master-0 kubenswrapper[31420]: I0220 12:18:34.901770 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5704bbe-5b81-411c-9641-e54f70784e12-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " 
pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.903820 master-0 kubenswrapper[31420]: I0220 12:18:34.903773 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5704bbe-5b81-411c-9641-e54f70784e12-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:34.912514 master-0 kubenswrapper[31420]: I0220 12:18:34.912447 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97zx\" (UniqueName: \"kubernetes.io/projected/e5704bbe-5b81-411c-9641-e54f70784e12-kube-api-access-z97zx\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:35.107376 master-0 kubenswrapper[31420]: I0220 12:18:35.105783 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b6f21ad3-3cfc-4b51-a176-975852c6496a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^57783d47-4fc0-44ca-9806-68c4afbf7bd5\") pod \"ovsdbserver-sb-0\" (UID: \"a534afa2-10da-4837-9cf8-6b2416df04dd\") " pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:35.276558 master-0 kubenswrapper[31420]: I0220 12:18:35.276353 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:36.465034 master-0 kubenswrapper[31420]: I0220 12:18:36.464965 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1a5fd64-50a6-41be-a75e-fc7ceaf698ae\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b02a2565-b59c-48fe-b71c-fc6c5c528aa1\") pod \"ovsdbserver-nb-0\" (UID: \"e5704bbe-5b81-411c-9641-e54f70784e12\") " pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:36.483482 master-0 kubenswrapper[31420]: I0220 12:18:36.483422 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:38.656502 master-0 kubenswrapper[31420]: I0220 12:18:38.656450 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 20 12:18:38.951951 master-0 kubenswrapper[31420]: I0220 12:18:38.951893 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 20 12:18:38.960856 master-0 kubenswrapper[31420]: I0220 12:18:38.960805 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 20 12:18:39.087130 master-0 kubenswrapper[31420]: E0220 12:18:39.087018 31420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5e587a0_149e_4023_9766_0ac33a7a5d4d.slice/crio-532d0ace4c389f5a9d8371e49dce55706040895136e05a94aff3afe37ae831ce\": RecentStats: unable to find data in memory cache]" Feb 20 12:18:39.107053 master-0 kubenswrapper[31420]: I0220 12:18:39.106997 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 20 12:18:39.176049 master-0 kubenswrapper[31420]: W0220 12:18:39.176003 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda777598a_6198_4385_860b_a04696e29a88.slice/crio-d1ffc24003e1e4f41f1e3d08e1312267bae3e9c996932834a2b658edf581f240 WatchSource:0}: Error finding container d1ffc24003e1e4f41f1e3d08e1312267bae3e9c996932834a2b658edf581f240: Status 404 returned error can't find the container with id d1ffc24003e1e4f41f1e3d08e1312267bae3e9c996932834a2b658edf581f240 Feb 20 12:18:39.179443 master-0 kubenswrapper[31420]: W0220 12:18:39.179409 31420 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd653c352_bccc_4fb7_bba0_97ad923e92e4.slice/crio-2f5ca2e47ca5dca2121eb087177ea82ca178d9bdb5dfa552896aad89c16d2cf3 WatchSource:0}: Error finding container 2f5ca2e47ca5dca2121eb087177ea82ca178d9bdb5dfa552896aad89c16d2cf3: Status 404 returned error can't find the container with id 2f5ca2e47ca5dca2121eb087177ea82ca178d9bdb5dfa552896aad89c16d2cf3 Feb 20 12:18:39.185284 master-0 kubenswrapper[31420]: I0220 12:18:39.185239 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 20 12:18:39.198287 master-0 kubenswrapper[31420]: I0220 12:18:39.198193 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4twdx"] Feb 20 12:18:39.829831 master-0 kubenswrapper[31420]: I0220 12:18:39.829756 31420 generic.go:334] "Generic (PLEG): container finished" podID="09686517-53db-438b-85ea-646904a52235" containerID="0057453b67b08d74526d4f74a47dd22c285c7fce484786a2ce2b4d6f7041938a" exitCode=0 Feb 20 12:18:39.830771 master-0 kubenswrapper[31420]: I0220 12:18:39.829849 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn" event={"ID":"09686517-53db-438b-85ea-646904a52235","Type":"ContainerDied","Data":"0057453b67b08d74526d4f74a47dd22c285c7fce484786a2ce2b4d6f7041938a"} Feb 20 12:18:39.833869 master-0 kubenswrapper[31420]: I0220 12:18:39.833805 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c5e587a0-149e-4023-9766-0ac33a7a5d4d","Type":"ContainerStarted","Data":"532d0ace4c389f5a9d8371e49dce55706040895136e05a94aff3afe37ae831ce"} Feb 20 12:18:39.835475 master-0 kubenswrapper[31420]: I0220 12:18:39.835442 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4twdx" event={"ID":"d653c352-bccc-4fb7-bba0-97ad923e92e4","Type":"ContainerStarted","Data":"2f5ca2e47ca5dca2121eb087177ea82ca178d9bdb5dfa552896aad89c16d2cf3"} Feb 20 
12:18:39.837399 master-0 kubenswrapper[31420]: I0220 12:18:39.837342 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a777598a-6198-4385-860b-a04696e29a88","Type":"ContainerStarted","Data":"d1ffc24003e1e4f41f1e3d08e1312267bae3e9c996932834a2b658edf581f240"} Feb 20 12:18:39.839449 master-0 kubenswrapper[31420]: I0220 12:18:39.839044 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3027dc76-27b3-44c4-b217-885670c3e29e","Type":"ContainerStarted","Data":"8b8a5ef19930449729cda18bf364bc0e7c6765f1269b7aed6f3cf110ed7656cc"} Feb 20 12:18:39.841050 master-0 kubenswrapper[31420]: I0220 12:18:39.841003 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-zbtsx" event={"ID":"c7efaf54-8294-4db8-acff-033af4e873ee","Type":"ContainerStarted","Data":"33326e7090b289c1a6a9bef39ef6d0fbfbfb0d0d38d798b2d697625cb9efaf9a"} Feb 20 12:18:39.841666 master-0 kubenswrapper[31420]: I0220 12:18:39.841612 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d78499c-zbtsx" podUID="c7efaf54-8294-4db8-acff-033af4e873ee" containerName="init" containerID="cri-o://33326e7090b289c1a6a9bef39ef6d0fbfbfb0d0d38d798b2d697625cb9efaf9a" gracePeriod=10 Feb 20 12:18:39.842182 master-0 kubenswrapper[31420]: I0220 12:18:39.842136 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d420cbb8-46c6-400b-b143-ab6a11e0ac04","Type":"ContainerStarted","Data":"d71697d3240e7e316078b9d7ca0d099eaa5e941005a49261a037530cc0584455"} Feb 20 12:18:39.844487 master-0 kubenswrapper[31420]: I0220 12:18:39.844442 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de0e242c-6018-42c0-8a59-b755e2bd36b0","Type":"ContainerStarted","Data":"365ffb6dff8498e69012eedc6c5b2335cbc5a44a2b9aba951f3f6ff8a3f71994"} Feb 20 12:18:40.369063 master-0 
kubenswrapper[31420]: I0220 12:18:40.368990 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 20 12:18:40.374672 master-0 kubenswrapper[31420]: W0220 12:18:40.374093 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda534afa2_10da_4837_9cf8_6b2416df04dd.slice/crio-33047ed22fc3008fd8d752d9edfbbbc4d095f0ae1c0420abe35619fda9f0d67a WatchSource:0}: Error finding container 33047ed22fc3008fd8d752d9edfbbbc4d095f0ae1c0420abe35619fda9f0d67a: Status 404 returned error can't find the container with id 33047ed22fc3008fd8d752d9edfbbbc4d095f0ae1c0420abe35619fda9f0d67a Feb 20 12:18:40.438204 master-0 kubenswrapper[31420]: I0220 12:18:40.438156 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn" Feb 20 12:18:40.565813 master-0 kubenswrapper[31420]: I0220 12:18:40.565363 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd2pd\" (UniqueName: \"kubernetes.io/projected/09686517-53db-438b-85ea-646904a52235-kube-api-access-fd2pd\") pod \"09686517-53db-438b-85ea-646904a52235\" (UID: \"09686517-53db-438b-85ea-646904a52235\") " Feb 20 12:18:40.565813 master-0 kubenswrapper[31420]: I0220 12:18:40.565703 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09686517-53db-438b-85ea-646904a52235-config\") pod \"09686517-53db-438b-85ea-646904a52235\" (UID: \"09686517-53db-438b-85ea-646904a52235\") " Feb 20 12:18:40.579188 master-0 kubenswrapper[31420]: I0220 12:18:40.578874 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09686517-53db-438b-85ea-646904a52235-kube-api-access-fd2pd" (OuterVolumeSpecName: "kube-api-access-fd2pd") pod "09686517-53db-438b-85ea-646904a52235" (UID: 
"09686517-53db-438b-85ea-646904a52235"). InnerVolumeSpecName "kube-api-access-fd2pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:18:40.604412 master-0 kubenswrapper[31420]: I0220 12:18:40.604214 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09686517-53db-438b-85ea-646904a52235-config" (OuterVolumeSpecName: "config") pod "09686517-53db-438b-85ea-646904a52235" (UID: "09686517-53db-438b-85ea-646904a52235"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:40.669084 master-0 kubenswrapper[31420]: I0220 12:18:40.669032 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09686517-53db-438b-85ea-646904a52235-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:40.669084 master-0 kubenswrapper[31420]: I0220 12:18:40.669076 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd2pd\" (UniqueName: \"kubernetes.io/projected/09686517-53db-438b-85ea-646904a52235-kube-api-access-fd2pd\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:40.759784 master-0 kubenswrapper[31420]: I0220 12:18:40.759548 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hzmsb"] Feb 20 12:18:40.820164 master-0 kubenswrapper[31420]: W0220 12:18:40.819676 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0dc8f5a_78ac_4bdf_9b05_953e0edf6616.slice/crio-23fd852ae5e7a2f9024282224803fe74654004ca975dd748f5b1881185547d38 WatchSource:0}: Error finding container 23fd852ae5e7a2f9024282224803fe74654004ca975dd748f5b1881185547d38: Status 404 returned error can't find the container with id 23fd852ae5e7a2f9024282224803fe74654004ca975dd748f5b1881185547d38 Feb 20 12:18:40.858286 master-0 kubenswrapper[31420]: I0220 12:18:40.857761 31420 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-ovs-hzmsb" event={"ID":"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616","Type":"ContainerStarted","Data":"23fd852ae5e7a2f9024282224803fe74654004ca975dd748f5b1881185547d38"} Feb 20 12:18:40.860714 master-0 kubenswrapper[31420]: I0220 12:18:40.860678 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn" event={"ID":"09686517-53db-438b-85ea-646904a52235","Type":"ContainerDied","Data":"b99c8772dde43a57b9b639f1e10c192a2e8bab9f5ad07f573f30ec2a7138311b"} Feb 20 12:18:40.860714 master-0 kubenswrapper[31420]: I0220 12:18:40.860726 31420 scope.go:117] "RemoveContainer" containerID="0057453b67b08d74526d4f74a47dd22c285c7fce484786a2ce2b4d6f7041938a" Feb 20 12:18:40.861123 master-0 kubenswrapper[31420]: I0220 12:18:40.860731 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-bdgzn" Feb 20 12:18:40.864079 master-0 kubenswrapper[31420]: I0220 12:18:40.863891 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a534afa2-10da-4837-9cf8-6b2416df04dd","Type":"ContainerStarted","Data":"33047ed22fc3008fd8d752d9edfbbbc4d095f0ae1c0420abe35619fda9f0d67a"} Feb 20 12:18:40.866901 master-0 kubenswrapper[31420]: I0220 12:18:40.866746 31420 generic.go:334] "Generic (PLEG): container finished" podID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" containerID="6879234707ce811e2d6e9345c0a83efe0f713f97427052ef0c378849056e403a" exitCode=0 Feb 20 12:18:40.866901 master-0 kubenswrapper[31420]: I0220 12:18:40.866808 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" event={"ID":"a8429e15-dbd4-44e4-ab7d-214ac5143db3","Type":"ContainerDied","Data":"6879234707ce811e2d6e9345c0a83efe0f713f97427052ef0c378849056e403a"} Feb 20 12:18:40.873036 master-0 kubenswrapper[31420]: I0220 12:18:40.872971 31420 generic.go:334] "Generic (PLEG): container finished" 
podID="c7efaf54-8294-4db8-acff-033af4e873ee" containerID="33326e7090b289c1a6a9bef39ef6d0fbfbfb0d0d38d798b2d697625cb9efaf9a" exitCode=0 Feb 20 12:18:40.873154 master-0 kubenswrapper[31420]: I0220 12:18:40.873033 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-zbtsx" event={"ID":"c7efaf54-8294-4db8-acff-033af4e873ee","Type":"ContainerDied","Data":"33326e7090b289c1a6a9bef39ef6d0fbfbfb0d0d38d798b2d697625cb9efaf9a"} Feb 20 12:18:40.877342 master-0 kubenswrapper[31420]: I0220 12:18:40.877285 31420 generic.go:334] "Generic (PLEG): container finished" podID="9a5729d2-75e9-403d-828f-2739cfc261e0" containerID="b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd" exitCode=0 Feb 20 12:18:40.877441 master-0 kubenswrapper[31420]: I0220 12:18:40.877343 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" event={"ID":"9a5729d2-75e9-403d-828f-2739cfc261e0","Type":"ContainerDied","Data":"b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd"} Feb 20 12:18:40.976419 master-0 kubenswrapper[31420]: I0220 12:18:40.973767 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-bdgzn"] Feb 20 12:18:41.007171 master-0 kubenswrapper[31420]: I0220 12:18:41.007096 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-bdgzn"] Feb 20 12:18:41.060368 master-0 kubenswrapper[31420]: I0220 12:18:41.060137 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 20 12:18:41.206294 master-0 kubenswrapper[31420]: I0220 12:18:41.205762 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-zbtsx" Feb 20 12:18:41.276464 master-0 kubenswrapper[31420]: E0220 12:18:41.276402 31420 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 20 12:18:41.276464 master-0 kubenswrapper[31420]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/a8429e15-dbd4-44e4-ab7d-214ac5143db3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 20 12:18:41.276464 master-0 kubenswrapper[31420]: > podSandboxID="12ed51c26e4584f071bd18669e760ad6c676a0f2f7bec5710ab38bcff6b5e628" Feb 20 12:18:41.276676 master-0 kubenswrapper[31420]: E0220 12:18:41.276643 31420 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 20 12:18:41.276676 master-0 kubenswrapper[31420]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbchf8h696h5ffh5cdh585hc5hbfh597h58dhfh554h67bh9bh5c9hfch7dh5fbhbbh567h78h669hf8h65dh55dh588h5ddh88h694h669h95h8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jz9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5bcd98d69f-42dnl_openstack(a8429e15-dbd4-44e4-ab7d-214ac5143db3): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/a8429e15-dbd4-44e4-ab7d-214ac5143db3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 20 12:18:41.276676 master-0 kubenswrapper[31420]: > logger="UnhandledError" Feb 20 12:18:41.278552 master-0 kubenswrapper[31420]: E0220 12:18:41.278493 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/a8429e15-dbd4-44e4-ab7d-214ac5143db3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" podUID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" Feb 20 12:18:41.281239 master-0 kubenswrapper[31420]: I0220 12:18:41.281202 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-config\") pod \"c7efaf54-8294-4db8-acff-033af4e873ee\" (UID: 
\"c7efaf54-8294-4db8-acff-033af4e873ee\") " Feb 20 12:18:41.281327 master-0 kubenswrapper[31420]: I0220 12:18:41.281264 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-dns-svc\") pod \"c7efaf54-8294-4db8-acff-033af4e873ee\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " Feb 20 12:18:41.281459 master-0 kubenswrapper[31420]: I0220 12:18:41.281432 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r64np\" (UniqueName: \"kubernetes.io/projected/c7efaf54-8294-4db8-acff-033af4e873ee-kube-api-access-r64np\") pod \"c7efaf54-8294-4db8-acff-033af4e873ee\" (UID: \"c7efaf54-8294-4db8-acff-033af4e873ee\") " Feb 20 12:18:41.285830 master-0 kubenswrapper[31420]: I0220 12:18:41.285751 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7efaf54-8294-4db8-acff-033af4e873ee-kube-api-access-r64np" (OuterVolumeSpecName: "kube-api-access-r64np") pod "c7efaf54-8294-4db8-acff-033af4e873ee" (UID: "c7efaf54-8294-4db8-acff-033af4e873ee"). InnerVolumeSpecName "kube-api-access-r64np". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:18:41.303680 master-0 kubenswrapper[31420]: I0220 12:18:41.303611 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7efaf54-8294-4db8-acff-033af4e873ee" (UID: "c7efaf54-8294-4db8-acff-033af4e873ee"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:41.383920 master-0 kubenswrapper[31420]: I0220 12:18:41.383791 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:41.383920 master-0 kubenswrapper[31420]: I0220 12:18:41.383842 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r64np\" (UniqueName: \"kubernetes.io/projected/c7efaf54-8294-4db8-acff-033af4e873ee-kube-api-access-r64np\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:41.512833 master-0 kubenswrapper[31420]: I0220 12:18:41.512772 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09686517-53db-438b-85ea-646904a52235" path="/var/lib/kubelet/pods/09686517-53db-438b-85ea-646904a52235/volumes" Feb 20 12:18:41.715860 master-0 kubenswrapper[31420]: I0220 12:18:41.715640 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-config" (OuterVolumeSpecName: "config") pod "c7efaf54-8294-4db8-acff-033af4e873ee" (UID: "c7efaf54-8294-4db8-acff-033af4e873ee"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:41.792477 master-0 kubenswrapper[31420]: I0220 12:18:41.792404 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7efaf54-8294-4db8-acff-033af4e873ee-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:41.891341 master-0 kubenswrapper[31420]: I0220 12:18:41.891248 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e5704bbe-5b81-411c-9641-e54f70784e12","Type":"ContainerStarted","Data":"1392e18c50750cf17f2ee073d285aeb9c50983c791094d9ea6f267732d021541"} Feb 20 12:18:41.893653 master-0 kubenswrapper[31420]: I0220 12:18:41.893584 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-zbtsx" Feb 20 12:18:41.893981 master-0 kubenswrapper[31420]: I0220 12:18:41.893590 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-zbtsx" event={"ID":"c7efaf54-8294-4db8-acff-033af4e873ee","Type":"ContainerDied","Data":"9c24ae3d60b2917701ee5b89569b6e90ecc50b1f99b3035abf04706573e39336"} Feb 20 12:18:41.894074 master-0 kubenswrapper[31420]: I0220 12:18:41.894002 31420 scope.go:117] "RemoveContainer" containerID="33326e7090b289c1a6a9bef39ef6d0fbfbfb0d0d38d798b2d697625cb9efaf9a" Feb 20 12:18:41.898064 master-0 kubenswrapper[31420]: I0220 12:18:41.897852 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" event={"ID":"9a5729d2-75e9-403d-828f-2739cfc261e0","Type":"ContainerStarted","Data":"d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c"} Feb 20 12:18:41.898159 master-0 kubenswrapper[31420]: I0220 12:18:41.898057 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:41.949061 master-0 kubenswrapper[31420]: I0220 12:18:41.946279 31420 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" podStartSLOduration=3.972430164 podStartE2EDuration="21.946255724s" podCreationTimestamp="2026-02-20 12:18:20 +0000 UTC" firstStartedPulling="2026-02-20 12:18:21.81182076 +0000 UTC m=+806.531059001" lastFinishedPulling="2026-02-20 12:18:39.78564632 +0000 UTC m=+824.504884561" observedRunningTime="2026-02-20 12:18:41.944206857 +0000 UTC m=+826.663445108" watchObservedRunningTime="2026-02-20 12:18:41.946255724 +0000 UTC m=+826.665493965" Feb 20 12:18:42.072301 master-0 kubenswrapper[31420]: I0220 12:18:42.072091 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-zbtsx"] Feb 20 12:18:42.082956 master-0 kubenswrapper[31420]: I0220 12:18:42.082796 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-zbtsx"] Feb 20 12:18:42.910445 master-0 kubenswrapper[31420]: I0220 12:18:42.910362 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" event={"ID":"a8429e15-dbd4-44e4-ab7d-214ac5143db3","Type":"ContainerStarted","Data":"9806e7b239de0176ba6f7f7a557d41d5806d476f905bda54b2d713ae3e60c45b"} Feb 20 12:18:42.911183 master-0 kubenswrapper[31420]: I0220 12:18:42.911133 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:43.523019 master-0 kubenswrapper[31420]: I0220 12:18:43.522964 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7efaf54-8294-4db8-acff-033af4e873ee" path="/var/lib/kubelet/pods/c7efaf54-8294-4db8-acff-033af4e873ee/volumes" Feb 20 12:18:45.557963 master-0 kubenswrapper[31420]: I0220 12:18:45.556818 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" podStartSLOduration=6.448850491 podStartE2EDuration="25.556794314s" podCreationTimestamp="2026-02-20 12:18:20 +0000 
UTC" firstStartedPulling="2026-02-20 12:18:21.348359725 +0000 UTC m=+806.067597966" lastFinishedPulling="2026-02-20 12:18:40.456303548 +0000 UTC m=+825.175541789" observedRunningTime="2026-02-20 12:18:42.953094833 +0000 UTC m=+827.672333094" watchObservedRunningTime="2026-02-20 12:18:45.556794314 +0000 UTC m=+830.276032555" Feb 20 12:18:46.278962 master-0 kubenswrapper[31420]: I0220 12:18:46.278874 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:18:46.380623 master-0 kubenswrapper[31420]: I0220 12:18:46.377389 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-42dnl"] Feb 20 12:18:46.380623 master-0 kubenswrapper[31420]: I0220 12:18:46.377702 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" podUID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" containerName="dnsmasq-dns" containerID="cri-o://9806e7b239de0176ba6f7f7a557d41d5806d476f905bda54b2d713ae3e60c45b" gracePeriod=10 Feb 20 12:18:46.979882 master-0 kubenswrapper[31420]: I0220 12:18:46.979802 31420 generic.go:334] "Generic (PLEG): container finished" podID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" containerID="9806e7b239de0176ba6f7f7a557d41d5806d476f905bda54b2d713ae3e60c45b" exitCode=0 Feb 20 12:18:46.979882 master-0 kubenswrapper[31420]: I0220 12:18:46.979868 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" event={"ID":"a8429e15-dbd4-44e4-ab7d-214ac5143db3","Type":"ContainerDied","Data":"9806e7b239de0176ba6f7f7a557d41d5806d476f905bda54b2d713ae3e60c45b"} Feb 20 12:18:50.484812 master-0 kubenswrapper[31420]: I0220 12:18:50.484539 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:50.596260 master-0 kubenswrapper[31420]: I0220 12:18:50.596145 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jz9n\" (UniqueName: \"kubernetes.io/projected/a8429e15-dbd4-44e4-ab7d-214ac5143db3-kube-api-access-5jz9n\") pod \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " Feb 20 12:18:50.596437 master-0 kubenswrapper[31420]: I0220 12:18:50.596396 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-config\") pod \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " Feb 20 12:18:50.596791 master-0 kubenswrapper[31420]: I0220 12:18:50.596575 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-dns-svc\") pod \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\" (UID: \"a8429e15-dbd4-44e4-ab7d-214ac5143db3\") " Feb 20 12:18:50.625990 master-0 kubenswrapper[31420]: I0220 12:18:50.625927 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8429e15-dbd4-44e4-ab7d-214ac5143db3-kube-api-access-5jz9n" (OuterVolumeSpecName: "kube-api-access-5jz9n") pod "a8429e15-dbd4-44e4-ab7d-214ac5143db3" (UID: "a8429e15-dbd4-44e4-ab7d-214ac5143db3"). InnerVolumeSpecName "kube-api-access-5jz9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:18:50.699188 master-0 kubenswrapper[31420]: I0220 12:18:50.699127 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jz9n\" (UniqueName: \"kubernetes.io/projected/a8429e15-dbd4-44e4-ab7d-214ac5143db3-kube-api-access-5jz9n\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:50.767056 master-0 kubenswrapper[31420]: I0220 12:18:50.766562 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8429e15-dbd4-44e4-ab7d-214ac5143db3" (UID: "a8429e15-dbd4-44e4-ab7d-214ac5143db3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:50.770917 master-0 kubenswrapper[31420]: I0220 12:18:50.770868 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-config" (OuterVolumeSpecName: "config") pod "a8429e15-dbd4-44e4-ab7d-214ac5143db3" (UID: "a8429e15-dbd4-44e4-ab7d-214ac5143db3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:50.801401 master-0 kubenswrapper[31420]: I0220 12:18:50.801075 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:50.801401 master-0 kubenswrapper[31420]: I0220 12:18:50.801134 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8429e15-dbd4-44e4-ab7d-214ac5143db3-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:51.025975 master-0 kubenswrapper[31420]: I0220 12:18:51.025836 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hzmsb" event={"ID":"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616","Type":"ContainerStarted","Data":"fdc795ab1f8b5cdcf6ccfb73ee9fafdbbdf8e23759ff966e02e1635e422eb68a"} Feb 20 12:18:51.028693 master-0 kubenswrapper[31420]: I0220 12:18:51.028654 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c5e587a0-149e-4023-9766-0ac33a7a5d4d","Type":"ContainerStarted","Data":"0fec9f565c0568e53192381059ef699238f15d199a83741a533c3964b487edfa"} Feb 20 12:18:51.031995 master-0 kubenswrapper[31420]: I0220 12:18:51.031928 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4twdx" event={"ID":"d653c352-bccc-4fb7-bba0-97ad923e92e4","Type":"ContainerStarted","Data":"2f819c4e5d78d7191c188c8479462383922f5515c06b2cd67f26d3643ccade9b"} Feb 20 12:18:51.033376 master-0 kubenswrapper[31420]: I0220 12:18:51.032910 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4twdx" Feb 20 12:18:51.035258 master-0 kubenswrapper[31420]: I0220 12:18:51.035233 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"a534afa2-10da-4837-9cf8-6b2416df04dd","Type":"ContainerStarted","Data":"fea95fe6ab9395a54aeafef359593c81ede5fe10d3dfb76e3eac9f3f80022089"} Feb 20 12:18:51.038706 master-0 kubenswrapper[31420]: I0220 12:18:51.038671 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a777598a-6198-4385-860b-a04696e29a88","Type":"ContainerStarted","Data":"92a68a762ab5afa94a2d9aabd4dc15dae090c062ad902779fbefec4a25d76748"} Feb 20 12:18:51.040740 master-0 kubenswrapper[31420]: I0220 12:18:51.040711 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e5704bbe-5b81-411c-9641-e54f70784e12","Type":"ContainerStarted","Data":"dd2cf713df755a54e88769010159ef2fbf3b134e60536050400845039b5ccf6e"} Feb 20 12:18:51.042727 master-0 kubenswrapper[31420]: I0220 12:18:51.042693 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" event={"ID":"a8429e15-dbd4-44e4-ab7d-214ac5143db3","Type":"ContainerDied","Data":"12ed51c26e4584f071bd18669e760ad6c676a0f2f7bec5710ab38bcff6b5e628"} Feb 20 12:18:51.042794 master-0 kubenswrapper[31420]: I0220 12:18:51.042732 31420 scope.go:117] "RemoveContainer" containerID="9806e7b239de0176ba6f7f7a557d41d5806d476f905bda54b2d713ae3e60c45b" Feb 20 12:18:51.042829 master-0 kubenswrapper[31420]: I0220 12:18:51.042821 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-42dnl" Feb 20 12:18:51.080808 master-0 kubenswrapper[31420]: I0220 12:18:51.076363 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d420cbb8-46c6-400b-b143-ab6a11e0ac04","Type":"ContainerStarted","Data":"5a02948331c5c4a48faecb7fa66e6de9a075edd87030e6428df4e8d7b788d7bd"} Feb 20 12:18:51.080808 master-0 kubenswrapper[31420]: I0220 12:18:51.076814 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 20 12:18:51.120264 master-0 kubenswrapper[31420]: I0220 12:18:51.120215 31420 scope.go:117] "RemoveContainer" containerID="6879234707ce811e2d6e9345c0a83efe0f713f97427052ef0c378849056e403a" Feb 20 12:18:51.122846 master-0 kubenswrapper[31420]: I0220 12:18:51.121878 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4twdx" podStartSLOduration=9.192387499 podStartE2EDuration="20.121854026s" podCreationTimestamp="2026-02-20 12:18:31 +0000 UTC" firstStartedPulling="2026-02-20 12:18:39.206511216 +0000 UTC m=+823.925749457" lastFinishedPulling="2026-02-20 12:18:50.135977703 +0000 UTC m=+834.855215984" observedRunningTime="2026-02-20 12:18:51.085241481 +0000 UTC m=+835.804479722" watchObservedRunningTime="2026-02-20 12:18:51.121854026 +0000 UTC m=+835.841092277" Feb 20 12:18:51.180850 master-0 kubenswrapper[31420]: I0220 12:18:51.180755 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.621844204 podStartE2EDuration="26.180733605s" podCreationTimestamp="2026-02-20 12:18:25 +0000 UTC" firstStartedPulling="2026-02-20 12:18:39.11166367 +0000 UTC m=+823.830901911" lastFinishedPulling="2026-02-20 12:18:43.670553071 +0000 UTC m=+828.389791312" observedRunningTime="2026-02-20 12:18:51.171833976 +0000 UTC m=+835.891072217" watchObservedRunningTime="2026-02-20 12:18:51.180733605 +0000 UTC m=+835.899971846" 
Feb 20 12:18:51.212034 master-0 kubenswrapper[31420]: I0220 12:18:51.211980 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-42dnl"] Feb 20 12:18:51.221894 master-0 kubenswrapper[31420]: I0220 12:18:51.221809 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-42dnl"] Feb 20 12:18:51.518406 master-0 kubenswrapper[31420]: I0220 12:18:51.518090 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" path="/var/lib/kubelet/pods/a8429e15-dbd4-44e4-ab7d-214ac5143db3/volumes" Feb 20 12:18:52.090988 master-0 kubenswrapper[31420]: I0220 12:18:52.090915 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de0e242c-6018-42c0-8a59-b755e2bd36b0","Type":"ContainerStarted","Data":"e8317f9af6dd37b6f0e47f2c5378d5aa457a3d9a46248b81bc77b1ada35e7013"} Feb 20 12:18:52.092933 master-0 kubenswrapper[31420]: I0220 12:18:52.092896 31420 generic.go:334] "Generic (PLEG): container finished" podID="c0dc8f5a-78ac-4bdf-9b05-953e0edf6616" containerID="fdc795ab1f8b5cdcf6ccfb73ee9fafdbbdf8e23759ff966e02e1635e422eb68a" exitCode=0 Feb 20 12:18:52.093023 master-0 kubenswrapper[31420]: I0220 12:18:52.092971 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hzmsb" event={"ID":"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616","Type":"ContainerDied","Data":"fdc795ab1f8b5cdcf6ccfb73ee9fafdbbdf8e23759ff966e02e1635e422eb68a"} Feb 20 12:18:53.105784 master-0 kubenswrapper[31420]: I0220 12:18:53.105685 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a534afa2-10da-4837-9cf8-6b2416df04dd","Type":"ContainerStarted","Data":"ad18a9cc430272edd1356b8eb8834838acca362ee655dc51156467c6711a3cb3"} Feb 20 12:18:53.111137 master-0 kubenswrapper[31420]: I0220 12:18:53.111060 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"e5704bbe-5b81-411c-9641-e54f70784e12","Type":"ContainerStarted","Data":"069da04a4b52b67c3d62862406cb2dc0b36a84c76e4bf9c056f7a894eeed8adf"} Feb 20 12:18:53.113452 master-0 kubenswrapper[31420]: I0220 12:18:53.113395 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3027dc76-27b3-44c4-b217-885670c3e29e","Type":"ContainerStarted","Data":"16f9eba54beb3b518c519cf6dc87986f00661bde733dc9205d462389d48e75c7"} Feb 20 12:18:53.117050 master-0 kubenswrapper[31420]: I0220 12:18:53.116978 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hzmsb" event={"ID":"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616","Type":"ContainerStarted","Data":"83e6b4c5fa03e042f785552a46fb68c7bc6071677000c1c5b5b39e009bccb5de"} Feb 20 12:18:53.151970 master-0 kubenswrapper[31420]: I0220 12:18:53.151671 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.027539901 podStartE2EDuration="22.151640628s" podCreationTimestamp="2026-02-20 12:18:31 +0000 UTC" firstStartedPulling="2026-02-20 12:18:40.410758542 +0000 UTC m=+825.129996783" lastFinishedPulling="2026-02-20 12:18:52.534859269 +0000 UTC m=+837.254097510" observedRunningTime="2026-02-20 12:18:53.128012926 +0000 UTC m=+837.847251187" watchObservedRunningTime="2026-02-20 12:18:53.151640628 +0000 UTC m=+837.870878909" Feb 20 12:18:53.172784 master-0 kubenswrapper[31420]: I0220 12:18:53.172226 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.713450496 podStartE2EDuration="21.172205584s" podCreationTimestamp="2026-02-20 12:18:32 +0000 UTC" firstStartedPulling="2026-02-20 12:18:41.066737489 +0000 UTC m=+825.785975730" lastFinishedPulling="2026-02-20 12:18:52.525492577 +0000 UTC m=+837.244730818" observedRunningTime="2026-02-20 12:18:53.159630072 +0000 UTC m=+837.878868403" 
watchObservedRunningTime="2026-02-20 12:18:53.172205584 +0000 UTC m=+837.891443835" Feb 20 12:18:53.278039 master-0 kubenswrapper[31420]: I0220 12:18:53.277953 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:53.319181 master-0 kubenswrapper[31420]: I0220 12:18:53.318894 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:54.149738 master-0 kubenswrapper[31420]: I0220 12:18:54.149614 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hzmsb" event={"ID":"c0dc8f5a-78ac-4bdf-9b05-953e0edf6616","Type":"ContainerStarted","Data":"f841e80f13be79bf9de53668a7eb5e2aadd9e6b974f3cdca519827ae0aa02e48"} Feb 20 12:18:54.150591 master-0 kubenswrapper[31420]: I0220 12:18:54.150564 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:54.150859 master-0 kubenswrapper[31420]: I0220 12:18:54.150827 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:54.150976 master-0 kubenswrapper[31420]: I0220 12:18:54.150960 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:18:54.316774 master-0 kubenswrapper[31420]: I0220 12:18:54.316667 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hzmsb" podStartSLOduration=14.01089786 podStartE2EDuration="23.316642706s" podCreationTimestamp="2026-02-20 12:18:31 +0000 UTC" firstStartedPulling="2026-02-20 12:18:40.823963942 +0000 UTC m=+825.543202183" lastFinishedPulling="2026-02-20 12:18:50.129708788 +0000 UTC m=+834.848947029" observedRunningTime="2026-02-20 12:18:54.309691791 +0000 UTC m=+839.028930072" watchObservedRunningTime="2026-02-20 12:18:54.316642706 +0000 UTC m=+839.035880987" Feb 20 12:18:54.484272 master-0 
kubenswrapper[31420]: I0220 12:18:54.484072 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:54.563914 master-0 kubenswrapper[31420]: I0220 12:18:54.563819 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:54.842735 master-0 kubenswrapper[31420]: I0220 12:18:54.842655 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vjbdv"] Feb 20 12:18:54.843445 master-0 kubenswrapper[31420]: E0220 12:18:54.843405 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" containerName="init" Feb 20 12:18:54.843445 master-0 kubenswrapper[31420]: I0220 12:18:54.843436 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" containerName="init" Feb 20 12:18:54.843566 master-0 kubenswrapper[31420]: E0220 12:18:54.843465 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7efaf54-8294-4db8-acff-033af4e873ee" containerName="init" Feb 20 12:18:54.843566 master-0 kubenswrapper[31420]: I0220 12:18:54.843476 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7efaf54-8294-4db8-acff-033af4e873ee" containerName="init" Feb 20 12:18:54.843566 master-0 kubenswrapper[31420]: E0220 12:18:54.843493 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09686517-53db-438b-85ea-646904a52235" containerName="init" Feb 20 12:18:54.843566 master-0 kubenswrapper[31420]: I0220 12:18:54.843501 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="09686517-53db-438b-85ea-646904a52235" containerName="init" Feb 20 12:18:54.843566 master-0 kubenswrapper[31420]: E0220 12:18:54.843544 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" containerName="dnsmasq-dns" Feb 20 12:18:54.843566 master-0 kubenswrapper[31420]: I0220 
12:18:54.843554 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" containerName="dnsmasq-dns" Feb 20 12:18:54.843971 master-0 kubenswrapper[31420]: I0220 12:18:54.843885 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8429e15-dbd4-44e4-ab7d-214ac5143db3" containerName="dnsmasq-dns" Feb 20 12:18:54.843971 master-0 kubenswrapper[31420]: I0220 12:18:54.843929 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="09686517-53db-438b-85ea-646904a52235" containerName="init" Feb 20 12:18:54.843971 master-0 kubenswrapper[31420]: I0220 12:18:54.843950 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7efaf54-8294-4db8-acff-033af4e873ee" containerName="init" Feb 20 12:18:54.845288 master-0 kubenswrapper[31420]: I0220 12:18:54.845219 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:54.848417 master-0 kubenswrapper[31420]: I0220 12:18:54.848354 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 20 12:18:54.851741 master-0 kubenswrapper[31420]: I0220 12:18:54.851692 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vjbdv"] Feb 20 12:18:54.949735 master-0 kubenswrapper[31420]: I0220 12:18:54.949646 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/87719869-6d9c-4b7e-b423-1c3c97c501c6-ovn-rundir\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:54.949958 master-0 kubenswrapper[31420]: I0220 12:18:54.949820 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/87719869-6d9c-4b7e-b423-1c3c97c501c6-config\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:54.949958 master-0 kubenswrapper[31420]: I0220 12:18:54.949865 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/87719869-6d9c-4b7e-b423-1c3c97c501c6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:54.949958 master-0 kubenswrapper[31420]: I0220 12:18:54.949913 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87719869-6d9c-4b7e-b423-1c3c97c501c6-combined-ca-bundle\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:54.949958 master-0 kubenswrapper[31420]: I0220 12:18:54.949949 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfd2c\" (UniqueName: \"kubernetes.io/projected/87719869-6d9c-4b7e-b423-1c3c97c501c6-kube-api-access-lfd2c\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:54.950159 master-0 kubenswrapper[31420]: I0220 12:18:54.950003 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/87719869-6d9c-4b7e-b423-1c3c97c501c6-ovs-rundir\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.051544 master-0 kubenswrapper[31420]: I0220 12:18:55.051449 
31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/87719869-6d9c-4b7e-b423-1c3c97c501c6-ovs-rundir\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.051686 master-0 kubenswrapper[31420]: I0220 12:18:55.051578 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/87719869-6d9c-4b7e-b423-1c3c97c501c6-ovn-rundir\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.051737 master-0 kubenswrapper[31420]: I0220 12:18:55.051688 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87719869-6d9c-4b7e-b423-1c3c97c501c6-config\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.051737 master-0 kubenswrapper[31420]: I0220 12:18:55.051716 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/87719869-6d9c-4b7e-b423-1c3c97c501c6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.051826 master-0 kubenswrapper[31420]: I0220 12:18:55.051750 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87719869-6d9c-4b7e-b423-1c3c97c501c6-combined-ca-bundle\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.051826 master-0 kubenswrapper[31420]: I0220 
12:18:55.051777 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfd2c\" (UniqueName: \"kubernetes.io/projected/87719869-6d9c-4b7e-b423-1c3c97c501c6-kube-api-access-lfd2c\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.052117 master-0 kubenswrapper[31420]: I0220 12:18:55.052035 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/87719869-6d9c-4b7e-b423-1c3c97c501c6-ovs-rundir\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.052209 master-0 kubenswrapper[31420]: I0220 12:18:55.052124 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/87719869-6d9c-4b7e-b423-1c3c97c501c6-ovn-rundir\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.052703 master-0 kubenswrapper[31420]: I0220 12:18:55.052662 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87719869-6d9c-4b7e-b423-1c3c97c501c6-config\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.055377 master-0 kubenswrapper[31420]: I0220 12:18:55.055338 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/87719869-6d9c-4b7e-b423-1c3c97c501c6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.055579 master-0 kubenswrapper[31420]: I0220 12:18:55.055509 
31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87719869-6d9c-4b7e-b423-1c3c97c501c6-combined-ca-bundle\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.166815 master-0 kubenswrapper[31420]: I0220 12:18:55.166730 31420 generic.go:334] "Generic (PLEG): container finished" podID="c5e587a0-149e-4023-9766-0ac33a7a5d4d" containerID="0fec9f565c0568e53192381059ef699238f15d199a83741a533c3964b487edfa" exitCode=0 Feb 20 12:18:55.167571 master-0 kubenswrapper[31420]: I0220 12:18:55.166857 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c5e587a0-149e-4023-9766-0ac33a7a5d4d","Type":"ContainerDied","Data":"0fec9f565c0568e53192381059ef699238f15d199a83741a533c3964b487edfa"} Feb 20 12:18:55.171511 master-0 kubenswrapper[31420]: I0220 12:18:55.171462 31420 generic.go:334] "Generic (PLEG): container finished" podID="a777598a-6198-4385-860b-a04696e29a88" containerID="92a68a762ab5afa94a2d9aabd4dc15dae090c062ad902779fbefec4a25d76748" exitCode=0 Feb 20 12:18:55.172386 master-0 kubenswrapper[31420]: I0220 12:18:55.172330 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a777598a-6198-4385-860b-a04696e29a88","Type":"ContainerDied","Data":"92a68a762ab5afa94a2d9aabd4dc15dae090c062ad902779fbefec4a25d76748"} Feb 20 12:18:55.173097 master-0 kubenswrapper[31420]: I0220 12:18:55.173057 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfd2c\" (UniqueName: \"kubernetes.io/projected/87719869-6d9c-4b7e-b423-1c3c97c501c6-kube-api-access-lfd2c\") pod \"ovn-controller-metrics-vjbdv\" (UID: \"87719869-6d9c-4b7e-b423-1c3c97c501c6\") " pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.173172 master-0 kubenswrapper[31420]: I0220 12:18:55.173136 31420 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:55.214137 master-0 kubenswrapper[31420]: I0220 12:18:55.214082 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 20 12:18:55.220008 master-0 kubenswrapper[31420]: I0220 12:18:55.219972 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 20 12:18:55.471473 master-0 kubenswrapper[31420]: I0220 12:18:55.471424 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vjbdv" Feb 20 12:18:55.622694 master-0 kubenswrapper[31420]: I0220 12:18:55.622636 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 20 12:18:56.189044 master-0 kubenswrapper[31420]: I0220 12:18:56.188996 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a777598a-6198-4385-860b-a04696e29a88","Type":"ContainerStarted","Data":"3ea032c257503c63bdd5d02aa8dfb8ff6921a26898251c365bdda99d76f6c7f5"} Feb 20 12:18:56.192615 master-0 kubenswrapper[31420]: I0220 12:18:56.192513 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c5e587a0-149e-4023-9766-0ac33a7a5d4d","Type":"ContainerStarted","Data":"2e77b47092a5f79210ed1e14f7fe25fcb7bfd1530b26b912fce928ad67783b55"} Feb 20 12:18:56.370859 master-0 kubenswrapper[31420]: I0220 12:18:56.370755 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vjbdv"] Feb 20 12:18:56.415010 master-0 kubenswrapper[31420]: I0220 12:18:56.410882 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.481818015 podStartE2EDuration="34.41083686s" podCreationTimestamp="2026-02-20 12:18:22 +0000 UTC" 
firstStartedPulling="2026-02-20 12:18:39.206800964 +0000 UTC m=+823.926039205" lastFinishedPulling="2026-02-20 12:18:50.135819809 +0000 UTC m=+834.855058050" observedRunningTime="2026-02-20 12:18:56.385157181 +0000 UTC m=+841.104395422" watchObservedRunningTime="2026-02-20 12:18:56.41083686 +0000 UTC m=+841.130075111" Feb 20 12:18:56.451216 master-0 kubenswrapper[31420]: I0220 12:18:56.450603 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dfbcfb9f5-49mk4"] Feb 20 12:18:56.454100 master-0 kubenswrapper[31420]: I0220 12:18:56.453987 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.438198844 podStartE2EDuration="33.453963528s" podCreationTimestamp="2026-02-20 12:18:23 +0000 UTC" firstStartedPulling="2026-02-20 12:18:39.146940258 +0000 UTC m=+823.866178499" lastFinishedPulling="2026-02-20 12:18:50.162704942 +0000 UTC m=+834.881943183" observedRunningTime="2026-02-20 12:18:56.439184094 +0000 UTC m=+841.158422335" watchObservedRunningTime="2026-02-20 12:18:56.453963528 +0000 UTC m=+841.173201789" Feb 20 12:18:56.457049 master-0 kubenswrapper[31420]: I0220 12:18:56.454469 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.467006 master-0 kubenswrapper[31420]: I0220 12:18:56.465978 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 20 12:18:56.516217 master-0 kubenswrapper[31420]: I0220 12:18:56.515571 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dfbcfb9f5-49mk4"] Feb 20 12:18:56.605512 master-0 kubenswrapper[31420]: I0220 12:18:56.605426 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6vlt\" (UniqueName: \"kubernetes.io/projected/5bb38b19-69ca-4974-8b86-b3c3823757e7-kube-api-access-h6vlt\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.605836 master-0 kubenswrapper[31420]: I0220 12:18:56.605785 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-ovsdbserver-sb\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.607109 master-0 kubenswrapper[31420]: I0220 12:18:56.606162 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-config\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.607109 master-0 kubenswrapper[31420]: I0220 12:18:56.606450 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-dns-svc\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: 
\"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.622676 master-0 kubenswrapper[31420]: I0220 12:18:56.620494 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dfbcfb9f5-49mk4"] Feb 20 12:18:56.622676 master-0 kubenswrapper[31420]: E0220 12:18:56.621254 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-h6vlt ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" podUID="5bb38b19-69ca-4974-8b86-b3c3823757e7" Feb 20 12:18:56.665272 master-0 kubenswrapper[31420]: I0220 12:18:56.664815 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cffcf5497-zk4vk"] Feb 20 12:18:56.666558 master-0 kubenswrapper[31420]: I0220 12:18:56.666517 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.671986 master-0 kubenswrapper[31420]: I0220 12:18:56.670664 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 20 12:18:56.676945 master-0 kubenswrapper[31420]: I0220 12:18:56.676880 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cffcf5497-zk4vk"] Feb 20 12:18:56.708449 master-0 kubenswrapper[31420]: I0220 12:18:56.708389 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-config\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.708679 master-0 kubenswrapper[31420]: I0220 12:18:56.708463 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-dns-svc\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.708679 master-0 kubenswrapper[31420]: I0220 12:18:56.708505 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-nb\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.708679 master-0 kubenswrapper[31420]: I0220 12:18:56.708543 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6vlt\" (UniqueName: \"kubernetes.io/projected/5bb38b19-69ca-4974-8b86-b3c3823757e7-kube-api-access-h6vlt\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.708679 master-0 kubenswrapper[31420]: I0220 12:18:56.708578 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-config\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.708679 master-0 kubenswrapper[31420]: I0220 12:18:56.708611 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-ovsdbserver-sb\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.708679 master-0 kubenswrapper[31420]: I0220 12:18:56.708644 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-dns-svc\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.726415 master-0 kubenswrapper[31420]: I0220 12:18:56.726326 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wnjw\" (UniqueName: \"kubernetes.io/projected/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-kube-api-access-8wnjw\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.726415 master-0 kubenswrapper[31420]: I0220 12:18:56.726398 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-sb\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.726727 master-0 kubenswrapper[31420]: I0220 12:18:56.712668 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-config\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.726727 master-0 kubenswrapper[31420]: I0220 12:18:56.713582 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-ovsdbserver-sb\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.752676 master-0 kubenswrapper[31420]: I0220 12:18:56.751161 
31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6vlt\" (UniqueName: \"kubernetes.io/projected/5bb38b19-69ca-4974-8b86-b3c3823757e7-kube-api-access-h6vlt\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.766232 master-0 kubenswrapper[31420]: I0220 12:18:56.766173 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-dns-svc\") pod \"dnsmasq-dns-dfbcfb9f5-49mk4\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:56.832782 master-0 kubenswrapper[31420]: I0220 12:18:56.832608 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-config\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.832782 master-0 kubenswrapper[31420]: I0220 12:18:56.832700 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-dns-svc\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.832782 master-0 kubenswrapper[31420]: I0220 12:18:56.832729 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wnjw\" (UniqueName: \"kubernetes.io/projected/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-kube-api-access-8wnjw\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.832782 master-0 kubenswrapper[31420]: I0220 12:18:56.832749 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-sb\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.832997 master-0 kubenswrapper[31420]: I0220 12:18:56.832835 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-nb\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.855609 master-0 kubenswrapper[31420]: I0220 12:18:56.837047 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-dns-svc\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.855609 master-0 kubenswrapper[31420]: I0220 12:18:56.841266 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-nb\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.855609 master-0 kubenswrapper[31420]: I0220 12:18:56.843631 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-sb\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.855609 master-0 kubenswrapper[31420]: I0220 12:18:56.843876 31420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-config\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:56.873188 master-0 kubenswrapper[31420]: I0220 12:18:56.869248 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wnjw\" (UniqueName: \"kubernetes.io/projected/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-kube-api-access-8wnjw\") pod \"dnsmasq-dns-cffcf5497-zk4vk\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:57.093846 master-0 kubenswrapper[31420]: I0220 12:18:57.093647 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:57.144733 master-0 kubenswrapper[31420]: I0220 12:18:57.144660 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 20 12:18:57.146421 master-0 kubenswrapper[31420]: I0220 12:18:57.146384 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 20 12:18:57.150254 master-0 kubenswrapper[31420]: I0220 12:18:57.150182 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 20 12:18:57.151236 master-0 kubenswrapper[31420]: I0220 12:18:57.151201 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 20 12:18:57.213601 master-0 kubenswrapper[31420]: I0220 12:18:57.209009 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 20 12:18:57.216568 master-0 kubenswrapper[31420]: I0220 12:18:57.216513 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 20 12:18:57.242537 master-0 kubenswrapper[31420]: I0220 12:18:57.242464 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20df0749-3b28-484d-905e-4c9027c36fb3-config\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.242704 master-0 kubenswrapper[31420]: I0220 12:18:57.242560 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20df0749-3b28-484d-905e-4c9027c36fb3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.242704 master-0 kubenswrapper[31420]: I0220 12:18:57.242662 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/20df0749-3b28-484d-905e-4c9027c36fb3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.243194 master-0 kubenswrapper[31420]: I0220 
12:18:57.242944 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20df0749-3b28-484d-905e-4c9027c36fb3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.243194 master-0 kubenswrapper[31420]: I0220 12:18:57.242973 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df0749-3b28-484d-905e-4c9027c36fb3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.243194 master-0 kubenswrapper[31420]: I0220 12:18:57.243059 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20df0749-3b28-484d-905e-4c9027c36fb3-scripts\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.243194 master-0 kubenswrapper[31420]: I0220 12:18:57.243098 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llhln\" (UniqueName: \"kubernetes.io/projected/20df0749-3b28-484d-905e-4c9027c36fb3-kube-api-access-llhln\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.265584 master-0 kubenswrapper[31420]: I0220 12:18:57.265510 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:57.265678 master-0 kubenswrapper[31420]: I0220 12:18:57.265582 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vjbdv" event={"ID":"87719869-6d9c-4b7e-b423-1c3c97c501c6","Type":"ContainerStarted","Data":"e7000547697f82e159f816b421a086d3ffb7843f8eee7af1042e9b254e8f7557"} Feb 20 12:18:57.265755 master-0 kubenswrapper[31420]: I0220 12:18:57.265703 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vjbdv" event={"ID":"87719869-6d9c-4b7e-b423-1c3c97c501c6","Type":"ContainerStarted","Data":"2d7b0c76e0e5437e0369161a3d138af46dbffcd4d8cb1283940950abfde787d3"} Feb 20 12:18:57.292925 master-0 kubenswrapper[31420]: I0220 12:18:57.292851 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:57.308669 master-0 kubenswrapper[31420]: I0220 12:18:57.308593 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vjbdv" podStartSLOduration=3.308571265 podStartE2EDuration="3.308571265s" podCreationTimestamp="2026-02-20 12:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:18:57.296666702 +0000 UTC m=+842.015904943" watchObservedRunningTime="2026-02-20 12:18:57.308571265 +0000 UTC m=+842.027809506" Feb 20 12:18:57.343995 master-0 kubenswrapper[31420]: I0220 12:18:57.343925 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-ovsdbserver-sb\") pod \"5bb38b19-69ca-4974-8b86-b3c3823757e7\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " Feb 20 12:18:57.344201 master-0 kubenswrapper[31420]: I0220 12:18:57.344098 31420 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-dns-svc\") pod \"5bb38b19-69ca-4974-8b86-b3c3823757e7\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " Feb 20 12:18:57.344201 master-0 kubenswrapper[31420]: I0220 12:18:57.344175 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-config\") pod \"5bb38b19-69ca-4974-8b86-b3c3823757e7\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " Feb 20 12:18:57.344392 master-0 kubenswrapper[31420]: I0220 12:18:57.344375 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6vlt\" (UniqueName: \"kubernetes.io/projected/5bb38b19-69ca-4974-8b86-b3c3823757e7-kube-api-access-h6vlt\") pod \"5bb38b19-69ca-4974-8b86-b3c3823757e7\" (UID: \"5bb38b19-69ca-4974-8b86-b3c3823757e7\") " Feb 20 12:18:57.344655 master-0 kubenswrapper[31420]: I0220 12:18:57.344635 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llhln\" (UniqueName: \"kubernetes.io/projected/20df0749-3b28-484d-905e-4c9027c36fb3-kube-api-access-llhln\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.344737 master-0 kubenswrapper[31420]: I0220 12:18:57.344719 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20df0749-3b28-484d-905e-4c9027c36fb3-config\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.344782 master-0 kubenswrapper[31420]: I0220 12:18:57.344754 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/20df0749-3b28-484d-905e-4c9027c36fb3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.344899 master-0 kubenswrapper[31420]: I0220 12:18:57.344855 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/20df0749-3b28-484d-905e-4c9027c36fb3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.344978 master-0 kubenswrapper[31420]: I0220 12:18:57.344962 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20df0749-3b28-484d-905e-4c9027c36fb3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.345009 master-0 kubenswrapper[31420]: I0220 12:18:57.344985 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df0749-3b28-484d-905e-4c9027c36fb3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.345039 master-0 kubenswrapper[31420]: I0220 12:18:57.345009 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20df0749-3b28-484d-905e-4c9027c36fb3-scripts\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.345972 master-0 kubenswrapper[31420]: I0220 12:18:57.345939 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5bb38b19-69ca-4974-8b86-b3c3823757e7" (UID: 
"5bb38b19-69ca-4974-8b86-b3c3823757e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:57.346478 master-0 kubenswrapper[31420]: I0220 12:18:57.346460 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bb38b19-69ca-4974-8b86-b3c3823757e7" (UID: "5bb38b19-69ca-4974-8b86-b3c3823757e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:57.348067 master-0 kubenswrapper[31420]: I0220 12:18:57.347554 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-config" (OuterVolumeSpecName: "config") pod "5bb38b19-69ca-4974-8b86-b3c3823757e7" (UID: "5bb38b19-69ca-4974-8b86-b3c3823757e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:57.348129 master-0 kubenswrapper[31420]: I0220 12:18:57.348097 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20df0749-3b28-484d-905e-4c9027c36fb3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.348299 master-0 kubenswrapper[31420]: I0220 12:18:57.348243 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20df0749-3b28-484d-905e-4c9027c36fb3-scripts\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.350080 master-0 kubenswrapper[31420]: I0220 12:18:57.350035 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20df0749-3b28-484d-905e-4c9027c36fb3-config\") pod \"ovn-northd-0\" (UID: 
\"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.352488 master-0 kubenswrapper[31420]: I0220 12:18:57.352448 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df0749-3b28-484d-905e-4c9027c36fb3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.352716 master-0 kubenswrapper[31420]: I0220 12:18:57.352686 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb38b19-69ca-4974-8b86-b3c3823757e7-kube-api-access-h6vlt" (OuterVolumeSpecName: "kube-api-access-h6vlt") pod "5bb38b19-69ca-4974-8b86-b3c3823757e7" (UID: "5bb38b19-69ca-4974-8b86-b3c3823757e7"). InnerVolumeSpecName "kube-api-access-h6vlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:18:57.354682 master-0 kubenswrapper[31420]: I0220 12:18:57.353836 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20df0749-3b28-484d-905e-4c9027c36fb3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.354682 master-0 kubenswrapper[31420]: I0220 12:18:57.354663 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/20df0749-3b28-484d-905e-4c9027c36fb3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.374838 master-0 kubenswrapper[31420]: I0220 12:18:57.374799 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llhln\" (UniqueName: \"kubernetes.io/projected/20df0749-3b28-484d-905e-4c9027c36fb3-kube-api-access-llhln\") pod \"ovn-northd-0\" (UID: 
\"20df0749-3b28-484d-905e-4c9027c36fb3\") " pod="openstack/ovn-northd-0" Feb 20 12:18:57.476844 master-0 kubenswrapper[31420]: I0220 12:18:57.471405 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:57.476844 master-0 kubenswrapper[31420]: I0220 12:18:57.471460 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6vlt\" (UniqueName: \"kubernetes.io/projected/5bb38b19-69ca-4974-8b86-b3c3823757e7-kube-api-access-h6vlt\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:57.476844 master-0 kubenswrapper[31420]: I0220 12:18:57.471477 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:57.476844 master-0 kubenswrapper[31420]: I0220 12:18:57.471490 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb38b19-69ca-4974-8b86-b3c3823757e7-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:57.540006 master-0 kubenswrapper[31420]: I0220 12:18:57.538274 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 20 12:18:57.624793 master-0 kubenswrapper[31420]: I0220 12:18:57.619194 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cffcf5497-zk4vk"] Feb 20 12:18:57.688688 master-0 kubenswrapper[31420]: I0220 12:18:57.686968 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-676b9854bc-9lfkk"] Feb 20 12:18:57.692283 master-0 kubenswrapper[31420]: I0220 12:18:57.691693 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.745833 master-0 kubenswrapper[31420]: W0220 12:18:57.743747 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfea9dd3f_4d6e_453c_9558_d2963dc7ae75.slice/crio-62819c67b20693620b0c33b99a5839d047b494d27f3e5486699882086d5d2dc9 WatchSource:0}: Error finding container 62819c67b20693620b0c33b99a5839d047b494d27f3e5486699882086d5d2dc9: Status 404 returned error can't find the container with id 62819c67b20693620b0c33b99a5839d047b494d27f3e5486699882086d5d2dc9 Feb 20 12:18:57.762162 master-0 kubenswrapper[31420]: I0220 12:18:57.760710 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cffcf5497-zk4vk"] Feb 20 12:18:57.770033 master-0 kubenswrapper[31420]: I0220 12:18:57.769974 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-676b9854bc-9lfkk"] Feb 20 12:18:57.778154 master-0 kubenswrapper[31420]: I0220 12:18:57.776902 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z62j8\" (UniqueName: \"kubernetes.io/projected/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-kube-api-access-z62j8\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.778154 master-0 kubenswrapper[31420]: I0220 12:18:57.777045 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-nb\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.778154 master-0 kubenswrapper[31420]: I0220 12:18:57.777103 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-config\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.778154 master-0 kubenswrapper[31420]: I0220 12:18:57.777122 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-sb\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.778154 master-0 kubenswrapper[31420]: I0220 12:18:57.777146 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-dns-svc\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.886350 master-0 kubenswrapper[31420]: I0220 12:18:57.883370 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-nb\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.886350 master-0 kubenswrapper[31420]: I0220 12:18:57.883503 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-config\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.886350 master-0 kubenswrapper[31420]: I0220 12:18:57.883536 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-sb\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.886350 master-0 kubenswrapper[31420]: I0220 12:18:57.883564 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-dns-svc\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.886350 master-0 kubenswrapper[31420]: I0220 12:18:57.883619 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z62j8\" (UniqueName: \"kubernetes.io/projected/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-kube-api-access-z62j8\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.886350 master-0 kubenswrapper[31420]: I0220 12:18:57.884603 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-nb\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.886350 master-0 kubenswrapper[31420]: I0220 12:18:57.885143 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-config\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.886350 master-0 kubenswrapper[31420]: I0220 12:18:57.885693 31420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-sb\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.886350 master-0 kubenswrapper[31420]: I0220 12:18:57.886195 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-dns-svc\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:57.900503 master-0 kubenswrapper[31420]: I0220 12:18:57.900459 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z62j8\" (UniqueName: \"kubernetes.io/projected/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-kube-api-access-z62j8\") pod \"dnsmasq-dns-676b9854bc-9lfkk\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:58.062860 master-0 kubenswrapper[31420]: I0220 12:18:58.062435 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:18:58.155577 master-0 kubenswrapper[31420]: I0220 12:18:58.154884 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 20 12:18:58.167870 master-0 kubenswrapper[31420]: W0220 12:18:58.167717 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20df0749_3b28_484d_905e_4c9027c36fb3.slice/crio-6d1fa4d9752db400e16a0b748645e2dd082b34883ed81ee7d56a05f238e8bde4 WatchSource:0}: Error finding container 6d1fa4d9752db400e16a0b748645e2dd082b34883ed81ee7d56a05f238e8bde4: Status 404 returned error can't find the container with id 6d1fa4d9752db400e16a0b748645e2dd082b34883ed81ee7d56a05f238e8bde4 Feb 20 12:18:58.284827 master-0 kubenswrapper[31420]: I0220 12:18:58.284750 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"20df0749-3b28-484d-905e-4c9027c36fb3","Type":"ContainerStarted","Data":"6d1fa4d9752db400e16a0b748645e2dd082b34883ed81ee7d56a05f238e8bde4"} Feb 20 12:18:58.287083 master-0 kubenswrapper[31420]: I0220 12:18:58.287033 31420 generic.go:334] "Generic (PLEG): container finished" podID="fea9dd3f-4d6e-453c-9558-d2963dc7ae75" containerID="ae67a553eaf905c1e9b9e9cbb94b7681751710da2caf525aca942ee2452eb470" exitCode=0 Feb 20 12:18:58.287200 master-0 kubenswrapper[31420]: I0220 12:18:58.287117 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dfbcfb9f5-49mk4" Feb 20 12:18:58.287274 master-0 kubenswrapper[31420]: I0220 12:18:58.287185 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" event={"ID":"fea9dd3f-4d6e-453c-9558-d2963dc7ae75","Type":"ContainerDied","Data":"ae67a553eaf905c1e9b9e9cbb94b7681751710da2caf525aca942ee2452eb470"} Feb 20 12:18:58.287274 master-0 kubenswrapper[31420]: I0220 12:18:58.287259 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" event={"ID":"fea9dd3f-4d6e-453c-9558-d2963dc7ae75","Type":"ContainerStarted","Data":"62819c67b20693620b0c33b99a5839d047b494d27f3e5486699882086d5d2dc9"} Feb 20 12:18:58.409238 master-0 kubenswrapper[31420]: I0220 12:18:58.407047 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dfbcfb9f5-49mk4"] Feb 20 12:18:58.421597 master-0 kubenswrapper[31420]: I0220 12:18:58.418730 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dfbcfb9f5-49mk4"] Feb 20 12:18:58.558082 master-0 kubenswrapper[31420]: I0220 12:18:58.557989 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-676b9854bc-9lfkk"] Feb 20 12:18:58.781725 master-0 kubenswrapper[31420]: I0220 12:18:58.781095 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:58.841294 master-0 kubenswrapper[31420]: I0220 12:18:58.841234 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-config\") pod \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " Feb 20 12:18:58.841414 master-0 kubenswrapper[31420]: I0220 12:18:58.841389 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-dns-svc\") pod \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " Feb 20 12:18:58.841455 master-0 kubenswrapper[31420]: I0220 12:18:58.841420 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-nb\") pod \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " Feb 20 12:18:58.841502 master-0 kubenswrapper[31420]: I0220 12:18:58.841475 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wnjw\" (UniqueName: \"kubernetes.io/projected/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-kube-api-access-8wnjw\") pod \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " Feb 20 12:18:58.841594 master-0 kubenswrapper[31420]: I0220 12:18:58.841575 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-sb\") pod \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\" (UID: \"fea9dd3f-4d6e-453c-9558-d2963dc7ae75\") " Feb 20 12:18:58.846757 master-0 kubenswrapper[31420]: I0220 12:18:58.846691 31420 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-kube-api-access-8wnjw" (OuterVolumeSpecName: "kube-api-access-8wnjw") pod "fea9dd3f-4d6e-453c-9558-d2963dc7ae75" (UID: "fea9dd3f-4d6e-453c-9558-d2963dc7ae75"). InnerVolumeSpecName "kube-api-access-8wnjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:18:58.885579 master-0 kubenswrapper[31420]: I0220 12:18:58.885418 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fea9dd3f-4d6e-453c-9558-d2963dc7ae75" (UID: "fea9dd3f-4d6e-453c-9558-d2963dc7ae75"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:58.887830 master-0 kubenswrapper[31420]: I0220 12:18:58.887763 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fea9dd3f-4d6e-453c-9558-d2963dc7ae75" (UID: "fea9dd3f-4d6e-453c-9558-d2963dc7ae75"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:58.888458 master-0 kubenswrapper[31420]: I0220 12:18:58.888420 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fea9dd3f-4d6e-453c-9558-d2963dc7ae75" (UID: "fea9dd3f-4d6e-453c-9558-d2963dc7ae75"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:58.896571 master-0 kubenswrapper[31420]: I0220 12:18:58.896485 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-config" (OuterVolumeSpecName: "config") pod "fea9dd3f-4d6e-453c-9558-d2963dc7ae75" (UID: "fea9dd3f-4d6e-453c-9558-d2963dc7ae75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:18:58.944851 master-0 kubenswrapper[31420]: I0220 12:18:58.944699 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wnjw\" (UniqueName: \"kubernetes.io/projected/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-kube-api-access-8wnjw\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:58.944851 master-0 kubenswrapper[31420]: I0220 12:18:58.944749 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:58.944851 master-0 kubenswrapper[31420]: I0220 12:18:58.944764 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:58.944851 master-0 kubenswrapper[31420]: I0220 12:18:58.944799 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:58.944851 master-0 kubenswrapper[31420]: I0220 12:18:58.944809 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fea9dd3f-4d6e-453c-9558-d2963dc7ae75-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:18:59.306653 master-0 kubenswrapper[31420]: I0220 12:18:59.306591 31420 
generic.go:334] "Generic (PLEG): container finished" podID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" containerID="0bd56b6dff0477fda0ff5cbfddf3cc3bc39c7fc06b098cadfaf944cff0bf86ce" exitCode=0 Feb 20 12:18:59.307190 master-0 kubenswrapper[31420]: I0220 12:18:59.306699 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" event={"ID":"739e8b2d-4f4b-4b30-8836-c655c9ebb68a","Type":"ContainerDied","Data":"0bd56b6dff0477fda0ff5cbfddf3cc3bc39c7fc06b098cadfaf944cff0bf86ce"} Feb 20 12:18:59.307190 master-0 kubenswrapper[31420]: I0220 12:18:59.306757 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" event={"ID":"739e8b2d-4f4b-4b30-8836-c655c9ebb68a","Type":"ContainerStarted","Data":"88ea0d441e05ac9923eaad90b285acfde61d86ccfb7f745125e748895217002f"} Feb 20 12:18:59.311821 master-0 kubenswrapper[31420]: I0220 12:18:59.311761 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" event={"ID":"fea9dd3f-4d6e-453c-9558-d2963dc7ae75","Type":"ContainerDied","Data":"62819c67b20693620b0c33b99a5839d047b494d27f3e5486699882086d5d2dc9"} Feb 20 12:18:59.311821 master-0 kubenswrapper[31420]: I0220 12:18:59.311810 31420 scope.go:117] "RemoveContainer" containerID="ae67a553eaf905c1e9b9e9cbb94b7681751710da2caf525aca942ee2452eb470" Feb 20 12:18:59.311963 master-0 kubenswrapper[31420]: I0220 12:18:59.311948 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cffcf5497-zk4vk" Feb 20 12:18:59.514790 master-0 kubenswrapper[31420]: I0220 12:18:59.514661 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb38b19-69ca-4974-8b86-b3c3823757e7" path="/var/lib/kubelet/pods/5bb38b19-69ca-4974-8b86-b3c3823757e7/volumes" Feb 20 12:18:59.518320 master-0 kubenswrapper[31420]: I0220 12:18:59.518266 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 20 12:18:59.518460 master-0 kubenswrapper[31420]: I0220 12:18:59.518447 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 20 12:18:59.557521 master-0 kubenswrapper[31420]: I0220 12:18:59.557394 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cffcf5497-zk4vk"] Feb 20 12:18:59.566397 master-0 kubenswrapper[31420]: I0220 12:18:59.566332 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cffcf5497-zk4vk"] Feb 20 12:18:59.770745 master-0 kubenswrapper[31420]: I0220 12:18:59.770654 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 20 12:18:59.771666 master-0 kubenswrapper[31420]: E0220 12:18:59.771235 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea9dd3f-4d6e-453c-9558-d2963dc7ae75" containerName="init" Feb 20 12:18:59.771666 master-0 kubenswrapper[31420]: I0220 12:18:59.771254 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea9dd3f-4d6e-453c-9558-d2963dc7ae75" containerName="init" Feb 20 12:18:59.771666 master-0 kubenswrapper[31420]: I0220 12:18:59.771633 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea9dd3f-4d6e-453c-9558-d2963dc7ae75" containerName="init" Feb 20 12:18:59.791552 master-0 kubenswrapper[31420]: I0220 12:18:59.791362 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 20 12:18:59.795598 master-0 kubenswrapper[31420]: I0220 12:18:59.795355 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 20 12:18:59.795791 master-0 kubenswrapper[31420]: I0220 12:18:59.795773 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 20 12:18:59.797969 master-0 kubenswrapper[31420]: I0220 12:18:59.795996 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 20 12:18:59.804730 master-0 kubenswrapper[31420]: I0220 12:18:59.802613 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 20 12:18:59.980259 master-0 kubenswrapper[31420]: I0220 12:18:59.980190 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4df0a86-1936-4efc-9b26-892f994fab0d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^702ed618-6e62-4be8-9c94-57d8e98dc457\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:18:59.980485 master-0 kubenswrapper[31420]: I0220 12:18:59.980271 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-lock\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:18:59.980485 master-0 kubenswrapper[31420]: I0220 12:18:59.980359 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:18:59.980852 master-0 kubenswrapper[31420]: 
I0220 12:18:59.980777 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-cache\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:18:59.981045 master-0 kubenswrapper[31420]: I0220 12:18:59.981021 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:18:59.981141 master-0 kubenswrapper[31420]: I0220 12:18:59.981125 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plt5v\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-kube-api-access-plt5v\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:00.083963 master-0 kubenswrapper[31420]: I0220 12:19:00.083834 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-cache\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:00.084154 master-0 kubenswrapper[31420]: I0220 12:19:00.084002 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plt5v\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-kube-api-access-plt5v\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:00.084154 master-0 kubenswrapper[31420]: I0220 12:19:00.084041 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:00.084224 master-0 kubenswrapper[31420]: I0220 12:19:00.084183 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4df0a86-1936-4efc-9b26-892f994fab0d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^702ed618-6e62-4be8-9c94-57d8e98dc457\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:00.084270 master-0 kubenswrapper[31420]: I0220 12:19:00.084242 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-lock\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:00.084328 master-0 kubenswrapper[31420]: I0220 12:19:00.084278 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:00.084779 master-0 kubenswrapper[31420]: I0220 12:19:00.084718 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-cache\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:00.084912 master-0 kubenswrapper[31420]: E0220 12:19:00.084893 31420 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 20 12:19:00.084978 master-0 kubenswrapper[31420]: E0220 
12:19:00.084968 31420 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 20 12:19:00.085087 master-0 kubenswrapper[31420]: E0220 12:19:00.085073 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift podName:b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad nodeName:}" failed. No retries permitted until 2026-02-20 12:19:00.585055343 +0000 UTC m=+845.304293574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift") pod "swift-storage-0" (UID: "b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad") : configmap "swift-ring-files" not found Feb 20 12:19:00.085157 master-0 kubenswrapper[31420]: I0220 12:19:00.085064 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-lock\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:00.096417 master-0 kubenswrapper[31420]: I0220 12:19:00.096384 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 12:19:00.096738 master-0 kubenswrapper[31420]: I0220 12:19:00.096710 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4df0a86-1936-4efc-9b26-892f994fab0d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^702ed618-6e62-4be8-9c94-57d8e98dc457\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6d25cbe54bda3eb21544ceb5a7eb36c1a9a6c09ff62961bea4d584cf5e17dfa3/globalmount\"" pod="openstack/swift-storage-0"
Feb 20 12:19:00.096895 master-0 kubenswrapper[31420]: I0220 12:19:00.096654 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0"
Feb 20 12:19:00.107158 master-0 kubenswrapper[31420]: I0220 12:19:00.107120 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plt5v\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-kube-api-access-plt5v\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0"
Feb 20 12:19:00.346627 master-0 kubenswrapper[31420]: I0220 12:19:00.346184 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" event={"ID":"739e8b2d-4f4b-4b30-8836-c655c9ebb68a","Type":"ContainerStarted","Data":"2e3389cb8f41c5ec97f02aa1a0138cb9bf08aabcca9c34064661418399af110e"}
Feb 20 12:19:00.351556 master-0 kubenswrapper[31420]: I0220 12:19:00.348160 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk"
Feb 20 12:19:00.355776 master-0 kubenswrapper[31420]: I0220 12:19:00.355503 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"20df0749-3b28-484d-905e-4c9027c36fb3","Type":"ContainerStarted","Data":"314819fd6505848df4e23fa492f234192611d30a21e0a15fd2e716a926186911"}
Feb 20 12:19:00.355776 master-0 kubenswrapper[31420]: I0220 12:19:00.355611 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"20df0749-3b28-484d-905e-4c9027c36fb3","Type":"ContainerStarted","Data":"135698d02a25d2309b5820866b02120380ab52caf5294f247848a4f9854244c2"}
Feb 20 12:19:00.355921 master-0 kubenswrapper[31420]: I0220 12:19:00.355805 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 20 12:19:00.388442 master-0 kubenswrapper[31420]: I0220 12:19:00.388328 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" podStartSLOduration=3.388306794 podStartE2EDuration="3.388306794s" podCreationTimestamp="2026-02-20 12:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:19:00.38210773 +0000 UTC m=+845.101346011" watchObservedRunningTime="2026-02-20 12:19:00.388306794 +0000 UTC m=+845.107545035"
Feb 20 12:19:00.420930 master-0 kubenswrapper[31420]: I0220 12:19:00.420848 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.272878503 podStartE2EDuration="3.420830044s" podCreationTimestamp="2026-02-20 12:18:57 +0000 UTC" firstStartedPulling="2026-02-20 12:18:58.173727238 +0000 UTC m=+842.892965479" lastFinishedPulling="2026-02-20 12:18:59.321678739 +0000 UTC m=+844.040917020" observedRunningTime="2026-02-20 12:19:00.413329404 +0000 UTC m=+845.132567675" watchObservedRunningTime="2026-02-20 12:19:00.420830044 +0000 UTC m=+845.140068285"
Feb 20 12:19:00.542117 master-0 kubenswrapper[31420]: I0220 12:19:00.541820 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-lk7sj"]
Feb 20 12:19:00.544169 master-0 kubenswrapper[31420]: I0220 12:19:00.543163 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.551922 master-0 kubenswrapper[31420]: I0220 12:19:00.551852 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 20 12:19:00.552214 master-0 kubenswrapper[31420]: I0220 12:19:00.551985 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 20 12:19:00.552214 master-0 kubenswrapper[31420]: I0220 12:19:00.551853 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 20 12:19:00.563078 master-0 kubenswrapper[31420]: I0220 12:19:00.563015 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lk7sj"]
Feb 20 12:19:00.602665 master-0 kubenswrapper[31420]: I0220 12:19:00.602558 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0"
Feb 20 12:19:00.602896 master-0 kubenswrapper[31420]: E0220 12:19:00.602769 31420 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 20 12:19:00.602896 master-0 kubenswrapper[31420]: E0220 12:19:00.602801 31420 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 20 12:19:00.602896 master-0 kubenswrapper[31420]: E0220 12:19:00.602886 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift podName:b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad nodeName:}" failed. No retries permitted until 2026-02-20 12:19:01.6028467 +0000 UTC m=+846.322084941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift") pod "swift-storage-0" (UID: "b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad") : configmap "swift-ring-files" not found
Feb 20 12:19:00.704729 master-0 kubenswrapper[31420]: I0220 12:19:00.704643 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-etc-swift\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.705071 master-0 kubenswrapper[31420]: I0220 12:19:00.704828 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-ring-data-devices\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.705167 master-0 kubenswrapper[31420]: I0220 12:19:00.705095 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-combined-ca-bundle\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.705280 master-0 kubenswrapper[31420]: I0220 12:19:00.705244 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-scripts\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.705365 master-0 kubenswrapper[31420]: I0220 12:19:00.705313 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g274b\" (UniqueName: \"kubernetes.io/projected/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-kube-api-access-g274b\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.705484 master-0 kubenswrapper[31420]: I0220 12:19:00.705453 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-dispersionconf\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.705762 master-0 kubenswrapper[31420]: I0220 12:19:00.705728 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-swiftconf\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.808268 master-0 kubenswrapper[31420]: I0220 12:19:00.808170 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-combined-ca-bundle\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.808652 master-0 kubenswrapper[31420]: I0220 12:19:00.808299 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-scripts\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.808652 master-0 kubenswrapper[31420]: I0220 12:19:00.808336 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g274b\" (UniqueName: \"kubernetes.io/projected/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-kube-api-access-g274b\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.808652 master-0 kubenswrapper[31420]: I0220 12:19:00.808388 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-dispersionconf\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.808652 master-0 kubenswrapper[31420]: I0220 12:19:00.808561 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-swiftconf\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.808652 master-0 kubenswrapper[31420]: I0220 12:19:00.808616 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-etc-swift\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.808832 master-0 kubenswrapper[31420]: I0220 12:19:00.808732 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-ring-data-devices\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.809346 master-0 kubenswrapper[31420]: I0220 12:19:00.809199 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-scripts\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.809837 master-0 kubenswrapper[31420]: I0220 12:19:00.809756 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-etc-swift\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.809910 master-0 kubenswrapper[31420]: I0220 12:19:00.809888 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-ring-data-devices\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.811956 master-0 kubenswrapper[31420]: I0220 12:19:00.811916 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-swiftconf\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.813473 master-0 kubenswrapper[31420]: I0220 12:19:00.812926 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-dispersionconf\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.816154 master-0 kubenswrapper[31420]: I0220 12:19:00.816081 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-combined-ca-bundle\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.841370 master-0 kubenswrapper[31420]: I0220 12:19:00.841301 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g274b\" (UniqueName: \"kubernetes.io/projected/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-kube-api-access-g274b\") pod \"swift-ring-rebalance-lk7sj\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:00.893291 master-0 kubenswrapper[31420]: E0220 12:19:00.893162 31420 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:39770->192.168.32.10:45797: write tcp 192.168.32.10:39770->192.168.32.10:45797: write: broken pipe
Feb 20 12:19:00.897015 master-0 kubenswrapper[31420]: I0220 12:19:00.896743 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lk7sj"
Feb 20 12:19:01.128603 master-0 kubenswrapper[31420]: I0220 12:19:01.127843 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 20 12:19:01.128603 master-0 kubenswrapper[31420]: I0220 12:19:01.127970 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 20 12:19:01.458628 master-0 kubenswrapper[31420]: I0220 12:19:01.458548 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lk7sj"]
Feb 20 12:19:01.472985 master-0 kubenswrapper[31420]: W0220 12:19:01.472796 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda64c3aa0_12b3_412e_804c_0fd4a79bc80f.slice/crio-21128e5dca57f46c69c3147d608dfbb92f37dd25c4c021ec39a8dfcdee34cbc5 WatchSource:0}: Error finding container 21128e5dca57f46c69c3147d608dfbb92f37dd25c4c021ec39a8dfcdee34cbc5: Status 404 returned error can't find the container with id 21128e5dca57f46c69c3147d608dfbb92f37dd25c4c021ec39a8dfcdee34cbc5
Feb 20 12:19:01.515771 master-0 kubenswrapper[31420]: I0220 12:19:01.515630 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea9dd3f-4d6e-453c-9558-d2963dc7ae75" path="/var/lib/kubelet/pods/fea9dd3f-4d6e-453c-9558-d2963dc7ae75/volumes"
Feb 20 12:19:01.623202 master-0 kubenswrapper[31420]: I0220 12:19:01.623131 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4df0a86-1936-4efc-9b26-892f994fab0d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^702ed618-6e62-4be8-9c94-57d8e98dc457\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0"
Feb 20 12:19:01.627375 master-0 kubenswrapper[31420]: I0220 12:19:01.627327 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0"
Feb 20 12:19:01.627752 master-0 kubenswrapper[31420]: E0220 12:19:01.627702 31420 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 20 12:19:01.627752 master-0 kubenswrapper[31420]: E0220 12:19:01.627745 31420 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 20 12:19:01.627869 master-0 kubenswrapper[31420]: E0220 12:19:01.627803 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift podName:b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad nodeName:}" failed. No retries permitted until 2026-02-20 12:19:03.627782956 +0000 UTC m=+848.347021197 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift") pod "swift-storage-0" (UID: "b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad") : configmap "swift-ring-files" not found
Feb 20 12:19:01.954191 master-0 kubenswrapper[31420]: I0220 12:19:01.954122 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 20 12:19:02.070197 master-0 kubenswrapper[31420]: I0220 12:19:02.070079 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 20 12:19:02.379089 master-0 kubenswrapper[31420]: I0220 12:19:02.378987 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lk7sj" event={"ID":"a64c3aa0-12b3-412e-804c-0fd4a79bc80f","Type":"ContainerStarted","Data":"21128e5dca57f46c69c3147d608dfbb92f37dd25c4c021ec39a8dfcdee34cbc5"}
Feb 20 12:19:02.554555 master-0 kubenswrapper[31420]: I0220 12:19:02.552601 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-62fhw"]
Feb 20 12:19:02.554555 master-0 kubenswrapper[31420]: I0220 12:19:02.554324 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-62fhw"
Feb 20 12:19:02.577305 master-0 kubenswrapper[31420]: I0220 12:19:02.571944 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 20 12:19:02.595546 master-0 kubenswrapper[31420]: I0220 12:19:02.577827 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-62fhw"]
Feb 20 12:19:02.676558 master-0 kubenswrapper[31420]: I0220 12:19:02.669201 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8zq\" (UniqueName: \"kubernetes.io/projected/eec86992-c892-4514-b5d7-273f76f1c91a-kube-api-access-gn8zq\") pod \"root-account-create-update-62fhw\" (UID: \"eec86992-c892-4514-b5d7-273f76f1c91a\") " pod="openstack/root-account-create-update-62fhw"
Feb 20 12:19:02.676558 master-0 kubenswrapper[31420]: I0220 12:19:02.669516 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eec86992-c892-4514-b5d7-273f76f1c91a-operator-scripts\") pod \"root-account-create-update-62fhw\" (UID: \"eec86992-c892-4514-b5d7-273f76f1c91a\") " pod="openstack/root-account-create-update-62fhw"
Feb 20 12:19:02.773295 master-0 kubenswrapper[31420]: I0220 12:19:02.772622 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8zq\" (UniqueName: \"kubernetes.io/projected/eec86992-c892-4514-b5d7-273f76f1c91a-kube-api-access-gn8zq\") pod \"root-account-create-update-62fhw\" (UID: \"eec86992-c892-4514-b5d7-273f76f1c91a\") " pod="openstack/root-account-create-update-62fhw"
Feb 20 12:19:02.774659 master-0 kubenswrapper[31420]: I0220 12:19:02.774270 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eec86992-c892-4514-b5d7-273f76f1c91a-operator-scripts\") pod \"root-account-create-update-62fhw\" (UID: \"eec86992-c892-4514-b5d7-273f76f1c91a\") " pod="openstack/root-account-create-update-62fhw"
Feb 20 12:19:02.775625 master-0 kubenswrapper[31420]: I0220 12:19:02.775584 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eec86992-c892-4514-b5d7-273f76f1c91a-operator-scripts\") pod \"root-account-create-update-62fhw\" (UID: \"eec86992-c892-4514-b5d7-273f76f1c91a\") " pod="openstack/root-account-create-update-62fhw"
Feb 20 12:19:02.793603 master-0 kubenswrapper[31420]: I0220 12:19:02.793218 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8zq\" (UniqueName: \"kubernetes.io/projected/eec86992-c892-4514-b5d7-273f76f1c91a-kube-api-access-gn8zq\") pod \"root-account-create-update-62fhw\" (UID: \"eec86992-c892-4514-b5d7-273f76f1c91a\") " pod="openstack/root-account-create-update-62fhw"
Feb 20 12:19:03.774019 master-0 kubenswrapper[31420]: I0220 12:19:02.996014 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-62fhw"
Feb 20 12:19:03.774019 master-0 kubenswrapper[31420]: I0220 12:19:03.694420 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0"
Feb 20 12:19:03.774019 master-0 kubenswrapper[31420]: E0220 12:19:03.694630 31420 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 20 12:19:03.774019 master-0 kubenswrapper[31420]: E0220 12:19:03.694668 31420 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 20 12:19:03.774019 master-0 kubenswrapper[31420]: E0220 12:19:03.694728 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift podName:b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad nodeName:}" failed. No retries permitted until 2026-02-20 12:19:07.694707478 +0000 UTC m=+852.413945719 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift") pod "swift-storage-0" (UID: "b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad") : configmap "swift-ring-files" not found
Feb 20 12:19:03.931307 master-0 kubenswrapper[31420]: I0220 12:19:03.931240 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 20 12:19:04.015717 master-0 kubenswrapper[31420]: I0220 12:19:04.015667 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 20 12:19:06.584279 master-0 kubenswrapper[31420]: I0220 12:19:06.584199 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zkxzd"]
Feb 20 12:19:06.587897 master-0 kubenswrapper[31420]: I0220 12:19:06.587777 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zkxzd"
Feb 20 12:19:06.601397 master-0 kubenswrapper[31420]: I0220 12:19:06.601110 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zkxzd"]
Feb 20 12:19:06.675510 master-0 kubenswrapper[31420]: I0220 12:19:06.675340 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9fa1-account-create-update-88db9"]
Feb 20 12:19:06.677909 master-0 kubenswrapper[31420]: I0220 12:19:06.677880 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9fa1-account-create-update-88db9"
Feb 20 12:19:06.680056 master-0 kubenswrapper[31420]: I0220 12:19:06.680010 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 20 12:19:06.686232 master-0 kubenswrapper[31420]: I0220 12:19:06.686176 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9fa1-account-create-update-88db9"]
Feb 20 12:19:06.765285 master-0 kubenswrapper[31420]: I0220 12:19:06.763712 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-k8p84"]
Feb 20 12:19:06.766417 master-0 kubenswrapper[31420]: I0220 12:19:06.765827 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-k8p84"
Feb 20 12:19:06.774163 master-0 kubenswrapper[31420]: I0220 12:19:06.773952 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-k8p84"]
Feb 20 12:19:06.788198 master-0 kubenswrapper[31420]: I0220 12:19:06.788103 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svsnf\" (UniqueName: \"kubernetes.io/projected/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-kube-api-access-svsnf\") pod \"keystone-9fa1-account-create-update-88db9\" (UID: \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\") " pod="openstack/keystone-9fa1-account-create-update-88db9"
Feb 20 12:19:06.788434 master-0 kubenswrapper[31420]: I0220 12:19:06.788259 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-operator-scripts\") pod \"keystone-9fa1-account-create-update-88db9\" (UID: \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\") " pod="openstack/keystone-9fa1-account-create-update-88db9"
Feb 20 12:19:06.788434 master-0 kubenswrapper[31420]: I0220 12:19:06.788286 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a83694-e4aa-4031-8ba3-7eaab90b0abd-operator-scripts\") pod \"keystone-db-create-zkxzd\" (UID: \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\") " pod="openstack/keystone-db-create-zkxzd"
Feb 20 12:19:06.788434 master-0 kubenswrapper[31420]: I0220 12:19:06.788366 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n54g8\" (UniqueName: \"kubernetes.io/projected/57a83694-e4aa-4031-8ba3-7eaab90b0abd-kube-api-access-n54g8\") pod \"keystone-db-create-zkxzd\" (UID: \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\") " pod="openstack/keystone-db-create-zkxzd"
Feb 20 12:19:06.880025 master-0 kubenswrapper[31420]: I0220 12:19:06.877688 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gmvz9"]
Feb 20 12:19:06.880025 master-0 kubenswrapper[31420]: I0220 12:19:06.879160 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gmvz9"
Feb 20 12:19:06.887927 master-0 kubenswrapper[31420]: I0220 12:19:06.887868 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9be4-account-create-update-4jzfk"]
Feb 20 12:19:06.889341 master-0 kubenswrapper[31420]: I0220 12:19:06.889306 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9be4-account-create-update-4jzfk"
Feb 20 12:19:06.892139 master-0 kubenswrapper[31420]: I0220 12:19:06.891859 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 20 12:19:06.894058 master-0 kubenswrapper[31420]: I0220 12:19:06.894014 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n54g8\" (UniqueName: \"kubernetes.io/projected/57a83694-e4aa-4031-8ba3-7eaab90b0abd-kube-api-access-n54g8\") pod \"keystone-db-create-zkxzd\" (UID: \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\") " pod="openstack/keystone-db-create-zkxzd"
Feb 20 12:19:06.896665 master-0 kubenswrapper[31420]: I0220 12:19:06.896618 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp445\" (UniqueName: \"kubernetes.io/projected/9e5fb862-336d-459e-8b0d-7a688bfc722c-kube-api-access-tp445\") pod \"placement-db-create-gmvz9\" (UID: \"9e5fb862-336d-459e-8b0d-7a688bfc722c\") " pod="openstack/placement-db-create-gmvz9"
Feb 20 12:19:06.897427 master-0 kubenswrapper[31420]: I0220 12:19:06.897407 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28293a37-d871-493f-8286-a6705a2e5bd8-operator-scripts\") pod \"glance-9be4-account-create-update-4jzfk\" (UID: \"28293a37-d871-493f-8286-a6705a2e5bd8\") " pod="openstack/glance-9be4-account-create-update-4jzfk"
Feb 20 12:19:06.897615 master-0 kubenswrapper[31420]: I0220 12:19:06.897516 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svsnf\" (UniqueName: \"kubernetes.io/projected/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-kube-api-access-svsnf\") pod \"keystone-9fa1-account-create-update-88db9\" (UID: \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\") " pod="openstack/keystone-9fa1-account-create-update-88db9"
Feb 20 12:19:06.899728 master-0 kubenswrapper[31420]: I0220 12:19:06.899679 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gmvz9"]
Feb 20 12:19:06.899941 master-0 kubenswrapper[31420]: I0220 12:19:06.899871 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-operator-scripts\") pod \"keystone-9fa1-account-create-update-88db9\" (UID: \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\") " pod="openstack/keystone-9fa1-account-create-update-88db9"
Feb 20 12:19:06.900952 master-0 kubenswrapper[31420]: I0220 12:19:06.900081 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a83694-e4aa-4031-8ba3-7eaab90b0abd-operator-scripts\") pod \"keystone-db-create-zkxzd\" (UID: \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\") " pod="openstack/keystone-db-create-zkxzd"
Feb 20 12:19:06.907950 master-0 kubenswrapper[31420]: I0220 12:19:06.902327 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-operator-scripts\") pod \"keystone-9fa1-account-create-update-88db9\" (UID: \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\") " pod="openstack/keystone-9fa1-account-create-update-88db9"
Feb 20 12:19:06.907950 master-0 kubenswrapper[31420]: I0220 12:19:06.900832 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a83694-e4aa-4031-8ba3-7eaab90b0abd-operator-scripts\") pod \"keystone-db-create-zkxzd\" (UID: \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\") " pod="openstack/keystone-db-create-zkxzd"
Feb 20 12:19:06.909117 master-0 kubenswrapper[31420]: I0220 12:19:06.908625 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0311ef3e-c313-4f78-86ce-371566b44c31-operator-scripts\") pod \"glance-db-create-k8p84\" (UID: \"0311ef3e-c313-4f78-86ce-371566b44c31\") " pod="openstack/glance-db-create-k8p84"
Feb 20 12:19:06.909117 master-0 kubenswrapper[31420]: I0220 12:19:06.908691 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsjb\" (UniqueName: \"kubernetes.io/projected/0311ef3e-c313-4f78-86ce-371566b44c31-kube-api-access-kvsjb\") pod \"glance-db-create-k8p84\" (UID: \"0311ef3e-c313-4f78-86ce-371566b44c31\") " pod="openstack/glance-db-create-k8p84"
Feb 20 12:19:06.909117 master-0 kubenswrapper[31420]: I0220 12:19:06.908812 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e5fb862-336d-459e-8b0d-7a688bfc722c-operator-scripts\") pod \"placement-db-create-gmvz9\" (UID: \"9e5fb862-336d-459e-8b0d-7a688bfc722c\") " pod="openstack/placement-db-create-gmvz9"
Feb 20 12:19:06.909117 master-0 kubenswrapper[31420]: I0220 12:19:06.908907 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprg9\" (UniqueName: \"kubernetes.io/projected/28293a37-d871-493f-8286-a6705a2e5bd8-kube-api-access-pprg9\") pod \"glance-9be4-account-create-update-4jzfk\" (UID: \"28293a37-d871-493f-8286-a6705a2e5bd8\") " pod="openstack/glance-9be4-account-create-update-4jzfk"
Feb 20 12:19:06.918650 master-0 kubenswrapper[31420]: I0220 12:19:06.918581 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n54g8\" (UniqueName: \"kubernetes.io/projected/57a83694-e4aa-4031-8ba3-7eaab90b0abd-kube-api-access-n54g8\") pod \"keystone-db-create-zkxzd\" (UID: \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\") " pod="openstack/keystone-db-create-zkxzd"
Feb 20 12:19:06.918920 master-0 kubenswrapper[31420]: I0220 12:19:06.918831 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9be4-account-create-update-4jzfk"]
Feb 20 12:19:06.982606 master-0 kubenswrapper[31420]: I0220 12:19:06.980305 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svsnf\" (UniqueName: \"kubernetes.io/projected/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-kube-api-access-svsnf\") pod \"keystone-9fa1-account-create-update-88db9\" (UID: \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\") " pod="openstack/keystone-9fa1-account-create-update-88db9"
Feb 20 12:19:06.984062 master-0 kubenswrapper[31420]: I0220 12:19:06.984015 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-62fhw"]
Feb 20 12:19:07.011004 master-0 kubenswrapper[31420]: I0220 12:19:07.010945 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0311ef3e-c313-4f78-86ce-371566b44c31-operator-scripts\") pod \"glance-db-create-k8p84\" (UID: \"0311ef3e-c313-4f78-86ce-371566b44c31\") " pod="openstack/glance-db-create-k8p84"
Feb 20 12:19:07.011004 master-0 kubenswrapper[31420]: I0220 12:19:07.011004 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvsjb\" (UniqueName: \"kubernetes.io/projected/0311ef3e-c313-4f78-86ce-371566b44c31-kube-api-access-kvsjb\") pod \"glance-db-create-k8p84\" (UID: \"0311ef3e-c313-4f78-86ce-371566b44c31\") " pod="openstack/glance-db-create-k8p84"
Feb 20 12:19:07.011234 master-0 kubenswrapper[31420]: I0220 12:19:07.011056 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e5fb862-336d-459e-8b0d-7a688bfc722c-operator-scripts\") pod \"placement-db-create-gmvz9\" (UID: \"9e5fb862-336d-459e-8b0d-7a688bfc722c\") " pod="openstack/placement-db-create-gmvz9"
Feb 20 12:19:07.011234 master-0 kubenswrapper[31420]: I0220 12:19:07.011089 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pprg9\" (UniqueName: \"kubernetes.io/projected/28293a37-d871-493f-8286-a6705a2e5bd8-kube-api-access-pprg9\") pod \"glance-9be4-account-create-update-4jzfk\" (UID: \"28293a37-d871-493f-8286-a6705a2e5bd8\") " pod="openstack/glance-9be4-account-create-update-4jzfk"
Feb 20 12:19:07.011234 master-0 kubenswrapper[31420]: I0220 12:19:07.011131 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp445\" (UniqueName: \"kubernetes.io/projected/9e5fb862-336d-459e-8b0d-7a688bfc722c-kube-api-access-tp445\") pod \"placement-db-create-gmvz9\" (UID: \"9e5fb862-336d-459e-8b0d-7a688bfc722c\") " pod="openstack/placement-db-create-gmvz9"
Feb 20 12:19:07.011234 master-0 kubenswrapper[31420]: I0220 12:19:07.011176 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28293a37-d871-493f-8286-a6705a2e5bd8-operator-scripts\") pod \"glance-9be4-account-create-update-4jzfk\" (UID: \"28293a37-d871-493f-8286-a6705a2e5bd8\") " pod="openstack/glance-9be4-account-create-update-4jzfk"
Feb 20 12:19:07.011944 master-0 kubenswrapper[31420]: I0220 12:19:07.011916 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28293a37-d871-493f-8286-a6705a2e5bd8-operator-scripts\") pod \"glance-9be4-account-create-update-4jzfk\" (UID: \"28293a37-d871-493f-8286-a6705a2e5bd8\") " pod="openstack/glance-9be4-account-create-update-4jzfk"
Feb 20 12:19:07.012256 master-0 kubenswrapper[31420]: I0220 12:19:07.012225 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0311ef3e-c313-4f78-86ce-371566b44c31-operator-scripts\") pod \"glance-db-create-k8p84\" (UID: \"0311ef3e-c313-4f78-86ce-371566b44c31\") " pod="openstack/glance-db-create-k8p84"
Feb 20 12:19:07.012443 master-0 kubenswrapper[31420]: I0220 12:19:07.012417 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e5fb862-336d-459e-8b0d-7a688bfc722c-operator-scripts\") pod \"placement-db-create-gmvz9\" (UID: \"9e5fb862-336d-459e-8b0d-7a688bfc722c\") " pod="openstack/placement-db-create-gmvz9"
Feb 20 12:19:07.022353 master-0 kubenswrapper[31420]: I0220 12:19:07.022330 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-54fa-account-create-update-5ng4x"]
Feb 20 12:19:07.023763 master-0 kubenswrapper[31420]: I0220 12:19:07.023745 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54fa-account-create-update-5ng4x"
Feb 20 12:19:07.029335 master-0 kubenswrapper[31420]: I0220 12:19:07.029299 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 20 12:19:07.032119 master-0 kubenswrapper[31420]: I0220 12:19:07.032088 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprg9\" (UniqueName: \"kubernetes.io/projected/28293a37-d871-493f-8286-a6705a2e5bd8-kube-api-access-pprg9\") pod \"glance-9be4-account-create-update-4jzfk\" (UID: \"28293a37-d871-493f-8286-a6705a2e5bd8\") " pod="openstack/glance-9be4-account-create-update-4jzfk"
Feb 20 12:19:07.032234 master-0 kubenswrapper[31420]: I0220 12:19:07.032198 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvsjb\" (UniqueName: \"kubernetes.io/projected/0311ef3e-c313-4f78-86ce-371566b44c31-kube-api-access-kvsjb\") pod \"glance-db-create-k8p84\" (UID: \"0311ef3e-c313-4f78-86ce-371566b44c31\") " pod="openstack/glance-db-create-k8p84"
Feb 20 12:19:07.036474 master-0 kubenswrapper[31420]: I0220 12:19:07.036414 31420 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-tp445\" (UniqueName: \"kubernetes.io/projected/9e5fb862-336d-459e-8b0d-7a688bfc722c-kube-api-access-tp445\") pod \"placement-db-create-gmvz9\" (UID: \"9e5fb862-336d-459e-8b0d-7a688bfc722c\") " pod="openstack/placement-db-create-gmvz9" Feb 20 12:19:07.036589 master-0 kubenswrapper[31420]: I0220 12:19:07.036502 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54fa-account-create-update-5ng4x"] Feb 20 12:19:07.053677 master-0 kubenswrapper[31420]: I0220 12:19:07.053543 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zkxzd" Feb 20 12:19:07.058518 master-0 kubenswrapper[31420]: I0220 12:19:07.058462 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9fa1-account-create-update-88db9" Feb 20 12:19:07.112848 master-0 kubenswrapper[31420]: I0220 12:19:07.112788 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8565cedf-c9e9-45a2-a463-00f1f5224559-operator-scripts\") pod \"placement-54fa-account-create-update-5ng4x\" (UID: \"8565cedf-c9e9-45a2-a463-00f1f5224559\") " pod="openstack/placement-54fa-account-create-update-5ng4x" Feb 20 12:19:07.112956 master-0 kubenswrapper[31420]: I0220 12:19:07.112872 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk6ss\" (UniqueName: \"kubernetes.io/projected/8565cedf-c9e9-45a2-a463-00f1f5224559-kube-api-access-hk6ss\") pod \"placement-54fa-account-create-update-5ng4x\" (UID: \"8565cedf-c9e9-45a2-a463-00f1f5224559\") " pod="openstack/placement-54fa-account-create-update-5ng4x" Feb 20 12:19:07.190510 master-0 kubenswrapper[31420]: I0220 12:19:07.190444 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-k8p84" Feb 20 12:19:07.214864 master-0 kubenswrapper[31420]: I0220 12:19:07.214803 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8565cedf-c9e9-45a2-a463-00f1f5224559-operator-scripts\") pod \"placement-54fa-account-create-update-5ng4x\" (UID: \"8565cedf-c9e9-45a2-a463-00f1f5224559\") " pod="openstack/placement-54fa-account-create-update-5ng4x" Feb 20 12:19:07.215039 master-0 kubenswrapper[31420]: I0220 12:19:07.214975 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk6ss\" (UniqueName: \"kubernetes.io/projected/8565cedf-c9e9-45a2-a463-00f1f5224559-kube-api-access-hk6ss\") pod \"placement-54fa-account-create-update-5ng4x\" (UID: \"8565cedf-c9e9-45a2-a463-00f1f5224559\") " pod="openstack/placement-54fa-account-create-update-5ng4x" Feb 20 12:19:07.216014 master-0 kubenswrapper[31420]: I0220 12:19:07.215970 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8565cedf-c9e9-45a2-a463-00f1f5224559-operator-scripts\") pod \"placement-54fa-account-create-update-5ng4x\" (UID: \"8565cedf-c9e9-45a2-a463-00f1f5224559\") " pod="openstack/placement-54fa-account-create-update-5ng4x" Feb 20 12:19:07.234659 master-0 kubenswrapper[31420]: I0220 12:19:07.234618 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk6ss\" (UniqueName: \"kubernetes.io/projected/8565cedf-c9e9-45a2-a463-00f1f5224559-kube-api-access-hk6ss\") pod \"placement-54fa-account-create-update-5ng4x\" (UID: \"8565cedf-c9e9-45a2-a463-00f1f5224559\") " pod="openstack/placement-54fa-account-create-update-5ng4x" Feb 20 12:19:07.277972 master-0 kubenswrapper[31420]: I0220 12:19:07.277905 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9be4-account-create-update-4jzfk" Feb 20 12:19:07.287917 master-0 kubenswrapper[31420]: I0220 12:19:07.286143 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gmvz9" Feb 20 12:19:07.362229 master-0 kubenswrapper[31420]: I0220 12:19:07.362183 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54fa-account-create-update-5ng4x" Feb 20 12:19:07.603444 master-0 kubenswrapper[31420]: I0220 12:19:07.603113 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zkxzd"] Feb 20 12:19:07.719734 master-0 kubenswrapper[31420]: I0220 12:19:07.719459 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9fa1-account-create-update-88db9"] Feb 20 12:19:07.731193 master-0 kubenswrapper[31420]: I0220 12:19:07.731111 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:07.731321 master-0 kubenswrapper[31420]: E0220 12:19:07.731295 31420 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 20 12:19:07.731369 master-0 kubenswrapper[31420]: E0220 12:19:07.731322 31420 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 20 12:19:07.731407 master-0 kubenswrapper[31420]: E0220 12:19:07.731371 31420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift podName:b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad nodeName:}" failed. 
No retries permitted until 2026-02-20 12:19:15.731353928 +0000 UTC m=+860.450592169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift") pod "swift-storage-0" (UID: "b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad") : configmap "swift-ring-files" not found Feb 20 12:19:07.888413 master-0 kubenswrapper[31420]: I0220 12:19:07.883503 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-k8p84"] Feb 20 12:19:07.895594 master-0 kubenswrapper[31420]: I0220 12:19:07.895433 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gmvz9"] Feb 20 12:19:07.910508 master-0 kubenswrapper[31420]: I0220 12:19:07.910453 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9be4-account-create-update-4jzfk"] Feb 20 12:19:07.915058 master-0 kubenswrapper[31420]: W0220 12:19:07.914894 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0311ef3e_c313_4f78_86ce_371566b44c31.slice/crio-bfbe3629eaec8d6546307cce75474aaacf4c57d882fc6d6399f3ae9b4cf4a648 WatchSource:0}: Error finding container bfbe3629eaec8d6546307cce75474aaacf4c57d882fc6d6399f3ae9b4cf4a648: Status 404 returned error can't find the container with id bfbe3629eaec8d6546307cce75474aaacf4c57d882fc6d6399f3ae9b4cf4a648 Feb 20 12:19:07.921441 master-0 kubenswrapper[31420]: I0220 12:19:07.921325 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lk7sj" event={"ID":"a64c3aa0-12b3-412e-804c-0fd4a79bc80f","Type":"ContainerStarted","Data":"85eb9edcf2aec7e72a973859c2082b33a88f4a83123b34d03c1f4e6e204817dc"} Feb 20 12:19:07.929888 master-0 kubenswrapper[31420]: I0220 12:19:07.929804 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zkxzd" 
event={"ID":"57a83694-e4aa-4031-8ba3-7eaab90b0abd","Type":"ContainerStarted","Data":"d33e2e556b5b8c42c348d08aef7a570d48ae1e8798e8034fa9aca426b7eda7bf"} Feb 20 12:19:07.942426 master-0 kubenswrapper[31420]: I0220 12:19:07.935042 31420 generic.go:334] "Generic (PLEG): container finished" podID="eec86992-c892-4514-b5d7-273f76f1c91a" containerID="d94419d0c149503d253a6791e1794d93c25f53c6bb440ec4c3d9f0faea4b6081" exitCode=0 Feb 20 12:19:07.942426 master-0 kubenswrapper[31420]: I0220 12:19:07.935156 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-62fhw" event={"ID":"eec86992-c892-4514-b5d7-273f76f1c91a","Type":"ContainerDied","Data":"d94419d0c149503d253a6791e1794d93c25f53c6bb440ec4c3d9f0faea4b6081"} Feb 20 12:19:07.942426 master-0 kubenswrapper[31420]: I0220 12:19:07.935216 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-62fhw" event={"ID":"eec86992-c892-4514-b5d7-273f76f1c91a","Type":"ContainerStarted","Data":"aa403e886a780833bdc3130567228316dbcbd8d0bac804ca1d95802193ef8639"} Feb 20 12:19:07.942426 master-0 kubenswrapper[31420]: I0220 12:19:07.937378 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fa1-account-create-update-88db9" event={"ID":"b0bba7eb-9925-4bdc-b18a-09a15d13bb07","Type":"ContainerStarted","Data":"e437bf783c8e949b1aa27b791ff730b35eefad605be67a1cf6491ff93d12d6d5"} Feb 20 12:19:07.967405 master-0 kubenswrapper[31420]: I0220 12:19:07.960152 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-lk7sj" podStartSLOduration=2.8648446720000003 podStartE2EDuration="7.960128073s" podCreationTimestamp="2026-02-20 12:19:00 +0000 UTC" firstStartedPulling="2026-02-20 12:19:01.48005061 +0000 UTC m=+846.199288851" lastFinishedPulling="2026-02-20 12:19:06.575334011 +0000 UTC m=+851.294572252" observedRunningTime="2026-02-20 12:19:07.95142557 +0000 UTC m=+852.670663831" 
watchObservedRunningTime="2026-02-20 12:19:07.960128073 +0000 UTC m=+852.679366334" Feb 20 12:19:07.987078 master-0 kubenswrapper[31420]: I0220 12:19:07.985254 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-zkxzd" podStartSLOduration=1.985229905 podStartE2EDuration="1.985229905s" podCreationTimestamp="2026-02-20 12:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:19:07.972807417 +0000 UTC m=+852.692045678" watchObservedRunningTime="2026-02-20 12:19:07.985229905 +0000 UTC m=+852.704468146" Feb 20 12:19:08.064794 master-0 kubenswrapper[31420]: I0220 12:19:08.064400 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:19:08.166105 master-0 kubenswrapper[31420]: I0220 12:19:08.164061 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-xg5ps"] Feb 20 12:19:08.166105 master-0 kubenswrapper[31420]: I0220 12:19:08.164299 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" podUID="9a5729d2-75e9-403d-828f-2739cfc261e0" containerName="dnsmasq-dns" containerID="cri-o://d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c" gracePeriod=10 Feb 20 12:19:08.257358 master-0 kubenswrapper[31420]: I0220 12:19:08.237184 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54fa-account-create-update-5ng4x"] Feb 20 12:19:08.865404 master-0 kubenswrapper[31420]: I0220 12:19:08.864640 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:19:08.952447 master-0 kubenswrapper[31420]: I0220 12:19:08.952379 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54fa-account-create-update-5ng4x" event={"ID":"8565cedf-c9e9-45a2-a463-00f1f5224559","Type":"ContainerStarted","Data":"66de4746d8519524504725e279badc0d8f23fd8623f9f07793a365f3dbfb15b9"} Feb 20 12:19:08.952979 master-0 kubenswrapper[31420]: I0220 12:19:08.952844 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54fa-account-create-update-5ng4x" event={"ID":"8565cedf-c9e9-45a2-a463-00f1f5224559","Type":"ContainerStarted","Data":"d29f95b691317daf4c5c33cb151ce1c4903f63c70d00e924f2ddd8151076ed9e"} Feb 20 12:19:08.954275 master-0 kubenswrapper[31420]: I0220 12:19:08.954224 31420 generic.go:334] "Generic (PLEG): container finished" podID="28293a37-d871-493f-8286-a6705a2e5bd8" containerID="f78318e942b07f7a320dda2061b8c25dd130a458eba70751b27e9a339975dbc6" exitCode=0 Feb 20 12:19:08.954405 master-0 kubenswrapper[31420]: I0220 12:19:08.954305 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9be4-account-create-update-4jzfk" event={"ID":"28293a37-d871-493f-8286-a6705a2e5bd8","Type":"ContainerDied","Data":"f78318e942b07f7a320dda2061b8c25dd130a458eba70751b27e9a339975dbc6"} Feb 20 12:19:08.954405 master-0 kubenswrapper[31420]: I0220 12:19:08.954334 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9be4-account-create-update-4jzfk" event={"ID":"28293a37-d871-493f-8286-a6705a2e5bd8","Type":"ContainerStarted","Data":"1769c2475ac61bca9a2e2417528e285125168921f5b14c9a0312a8e508299c90"} Feb 20 12:19:08.956660 master-0 kubenswrapper[31420]: I0220 12:19:08.956626 31420 generic.go:334] "Generic (PLEG): container finished" podID="9a5729d2-75e9-403d-828f-2739cfc261e0" containerID="d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c" exitCode=0 Feb 20 12:19:08.956874 
master-0 kubenswrapper[31420]: I0220 12:19:08.956712 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" event={"ID":"9a5729d2-75e9-403d-828f-2739cfc261e0","Type":"ContainerDied","Data":"d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c"} Feb 20 12:19:08.956874 master-0 kubenswrapper[31420]: I0220 12:19:08.956750 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" event={"ID":"9a5729d2-75e9-403d-828f-2739cfc261e0","Type":"ContainerDied","Data":"1244e04c421302b01dd1d7e7ba86be129bf7a258bc8810867677d8f00fb23cf1"} Feb 20 12:19:08.956874 master-0 kubenswrapper[31420]: I0220 12:19:08.956768 31420 scope.go:117] "RemoveContainer" containerID="d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c" Feb 20 12:19:08.956874 master-0 kubenswrapper[31420]: I0220 12:19:08.956872 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-xg5ps" Feb 20 12:19:08.959935 master-0 kubenswrapper[31420]: I0220 12:19:08.959905 31420 generic.go:334] "Generic (PLEG): container finished" podID="0311ef3e-c313-4f78-86ce-371566b44c31" containerID="3dbb81d5035182c2dbc7c4509b6861a658218421378232f8e1e6d08cbb50105c" exitCode=0 Feb 20 12:19:08.959994 master-0 kubenswrapper[31420]: I0220 12:19:08.959946 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-k8p84" event={"ID":"0311ef3e-c313-4f78-86ce-371566b44c31","Type":"ContainerDied","Data":"3dbb81d5035182c2dbc7c4509b6861a658218421378232f8e1e6d08cbb50105c"} Feb 20 12:19:08.962429 master-0 kubenswrapper[31420]: I0220 12:19:08.959993 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-k8p84" event={"ID":"0311ef3e-c313-4f78-86ce-371566b44c31","Type":"ContainerStarted","Data":"bfbe3629eaec8d6546307cce75474aaacf4c57d882fc6d6399f3ae9b4cf4a648"} Feb 20 12:19:09.005708 master-0 kubenswrapper[31420]: I0220 
12:19:09.005605 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-54fa-account-create-update-5ng4x" podStartSLOduration=3.005583194 podStartE2EDuration="3.005583194s" podCreationTimestamp="2026-02-20 12:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:19:08.971316494 +0000 UTC m=+853.690554745" watchObservedRunningTime="2026-02-20 12:19:09.005583194 +0000 UTC m=+853.724821435" Feb 20 12:19:09.010705 master-0 kubenswrapper[31420]: I0220 12:19:09.009506 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-config\") pod \"9a5729d2-75e9-403d-828f-2739cfc261e0\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " Feb 20 12:19:09.010873 master-0 kubenswrapper[31420]: I0220 12:19:09.010752 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6gxz\" (UniqueName: \"kubernetes.io/projected/9a5729d2-75e9-403d-828f-2739cfc261e0-kube-api-access-k6gxz\") pod \"9a5729d2-75e9-403d-828f-2739cfc261e0\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " Feb 20 12:19:09.010952 master-0 kubenswrapper[31420]: I0220 12:19:09.010914 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-dns-svc\") pod \"9a5729d2-75e9-403d-828f-2739cfc261e0\" (UID: \"9a5729d2-75e9-403d-828f-2739cfc261e0\") " Feb 20 12:19:09.018805 master-0 kubenswrapper[31420]: I0220 12:19:09.014950 31420 scope.go:117] "RemoveContainer" containerID="b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd" Feb 20 12:19:09.045328 master-0 kubenswrapper[31420]: I0220 12:19:09.021037 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9a5729d2-75e9-403d-828f-2739cfc261e0-kube-api-access-k6gxz" (OuterVolumeSpecName: "kube-api-access-k6gxz") pod "9a5729d2-75e9-403d-828f-2739cfc261e0" (UID: "9a5729d2-75e9-403d-828f-2739cfc261e0"). InnerVolumeSpecName "kube-api-access-k6gxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:09.045328 master-0 kubenswrapper[31420]: I0220 12:19:09.027114 31420 generic.go:334] "Generic (PLEG): container finished" podID="b0bba7eb-9925-4bdc-b18a-09a15d13bb07" containerID="7f94cef87282f6f121aa17622625616ad8a9ba45820aec1952de28eadbd8d41e" exitCode=0 Feb 20 12:19:09.045328 master-0 kubenswrapper[31420]: I0220 12:19:09.027195 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fa1-account-create-update-88db9" event={"ID":"b0bba7eb-9925-4bdc-b18a-09a15d13bb07","Type":"ContainerDied","Data":"7f94cef87282f6f121aa17622625616ad8a9ba45820aec1952de28eadbd8d41e"} Feb 20 12:19:09.045328 master-0 kubenswrapper[31420]: I0220 12:19:09.042284 31420 generic.go:334] "Generic (PLEG): container finished" podID="9e5fb862-336d-459e-8b0d-7a688bfc722c" containerID="be0dc5fc593a041518a78b9625a2e4d7c0033dd7f77c9df9b27ad85c1ad90f7e" exitCode=0 Feb 20 12:19:09.045328 master-0 kubenswrapper[31420]: I0220 12:19:09.042432 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gmvz9" event={"ID":"9e5fb862-336d-459e-8b0d-7a688bfc722c","Type":"ContainerDied","Data":"be0dc5fc593a041518a78b9625a2e4d7c0033dd7f77c9df9b27ad85c1ad90f7e"} Feb 20 12:19:09.045328 master-0 kubenswrapper[31420]: I0220 12:19:09.042502 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gmvz9" event={"ID":"9e5fb862-336d-459e-8b0d-7a688bfc722c","Type":"ContainerStarted","Data":"1e814f9f90215e4a8895a62b1925be75b620eb67786301aaa387c5cfea199957"} Feb 20 12:19:09.045328 master-0 kubenswrapper[31420]: I0220 12:19:09.043814 31420 generic.go:334] "Generic (PLEG): container finished" 
podID="57a83694-e4aa-4031-8ba3-7eaab90b0abd" containerID="637f8fa838694e5132ca7464110a75c39414ef566f4293b34cd8a9f584f7674a" exitCode=0 Feb 20 12:19:09.045328 master-0 kubenswrapper[31420]: I0220 12:19:09.044047 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zkxzd" event={"ID":"57a83694-e4aa-4031-8ba3-7eaab90b0abd","Type":"ContainerDied","Data":"637f8fa838694e5132ca7464110a75c39414ef566f4293b34cd8a9f584f7674a"} Feb 20 12:19:09.084205 master-0 kubenswrapper[31420]: I0220 12:19:09.083689 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a5729d2-75e9-403d-828f-2739cfc261e0" (UID: "9a5729d2-75e9-403d-828f-2739cfc261e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:09.099671 master-0 kubenswrapper[31420]: I0220 12:19:09.099612 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-config" (OuterVolumeSpecName: "config") pod "9a5729d2-75e9-403d-828f-2739cfc261e0" (UID: "9a5729d2-75e9-403d-828f-2739cfc261e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:09.119307 master-0 kubenswrapper[31420]: I0220 12:19:09.119236 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:09.119307 master-0 kubenswrapper[31420]: I0220 12:19:09.119281 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6gxz\" (UniqueName: \"kubernetes.io/projected/9a5729d2-75e9-403d-828f-2739cfc261e0-kube-api-access-k6gxz\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:09.119307 master-0 kubenswrapper[31420]: I0220 12:19:09.119295 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a5729d2-75e9-403d-828f-2739cfc261e0-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:09.121336 master-0 kubenswrapper[31420]: I0220 12:19:09.121293 31420 scope.go:117] "RemoveContainer" containerID="d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c" Feb 20 12:19:09.121938 master-0 kubenswrapper[31420]: E0220 12:19:09.121766 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c\": container with ID starting with d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c not found: ID does not exist" containerID="d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c" Feb 20 12:19:09.121938 master-0 kubenswrapper[31420]: I0220 12:19:09.121802 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c"} err="failed to get container status \"d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c\": rpc error: code = NotFound desc = could not find container 
\"d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c\": container with ID starting with d9a6cf245b522029fd2e222fa1221bf380cbf679290868b76b720ea1293c8a6c not found: ID does not exist" Feb 20 12:19:09.121938 master-0 kubenswrapper[31420]: I0220 12:19:09.121829 31420 scope.go:117] "RemoveContainer" containerID="b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd" Feb 20 12:19:09.122110 master-0 kubenswrapper[31420]: E0220 12:19:09.122083 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd\": container with ID starting with b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd not found: ID does not exist" containerID="b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd" Feb 20 12:19:09.122170 master-0 kubenswrapper[31420]: I0220 12:19:09.122112 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd"} err="failed to get container status \"b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd\": rpc error: code = NotFound desc = could not find container \"b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd\": container with ID starting with b50961e40ba8960875a504af2b3dfed348310e41997560d7b5d15e26b548fbbd not found: ID does not exist" Feb 20 12:19:09.358185 master-0 kubenswrapper[31420]: I0220 12:19:09.358105 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-xg5ps"] Feb 20 12:19:09.376250 master-0 kubenswrapper[31420]: I0220 12:19:09.376192 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-xg5ps"] Feb 20 12:19:09.522029 master-0 kubenswrapper[31420]: I0220 12:19:09.521988 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9a5729d2-75e9-403d-828f-2739cfc261e0" path="/var/lib/kubelet/pods/9a5729d2-75e9-403d-828f-2739cfc261e0/volumes" Feb 20 12:19:09.605372 master-0 kubenswrapper[31420]: I0220 12:19:09.605255 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-62fhw" Feb 20 12:19:09.738809 master-0 kubenswrapper[31420]: I0220 12:19:09.729342 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn8zq\" (UniqueName: \"kubernetes.io/projected/eec86992-c892-4514-b5d7-273f76f1c91a-kube-api-access-gn8zq\") pod \"eec86992-c892-4514-b5d7-273f76f1c91a\" (UID: \"eec86992-c892-4514-b5d7-273f76f1c91a\") " Feb 20 12:19:09.738809 master-0 kubenswrapper[31420]: I0220 12:19:09.729456 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eec86992-c892-4514-b5d7-273f76f1c91a-operator-scripts\") pod \"eec86992-c892-4514-b5d7-273f76f1c91a\" (UID: \"eec86992-c892-4514-b5d7-273f76f1c91a\") " Feb 20 12:19:09.738809 master-0 kubenswrapper[31420]: I0220 12:19:09.730510 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec86992-c892-4514-b5d7-273f76f1c91a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eec86992-c892-4514-b5d7-273f76f1c91a" (UID: "eec86992-c892-4514-b5d7-273f76f1c91a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:09.738809 master-0 kubenswrapper[31420]: I0220 12:19:09.732849 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec86992-c892-4514-b5d7-273f76f1c91a-kube-api-access-gn8zq" (OuterVolumeSpecName: "kube-api-access-gn8zq") pod "eec86992-c892-4514-b5d7-273f76f1c91a" (UID: "eec86992-c892-4514-b5d7-273f76f1c91a"). InnerVolumeSpecName "kube-api-access-gn8zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:09.831666 master-0 kubenswrapper[31420]: I0220 12:19:09.831620 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn8zq\" (UniqueName: \"kubernetes.io/projected/eec86992-c892-4514-b5d7-273f76f1c91a-kube-api-access-gn8zq\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:09.831666 master-0 kubenswrapper[31420]: I0220 12:19:09.831659 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eec86992-c892-4514-b5d7-273f76f1c91a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:10.057454 master-0 kubenswrapper[31420]: I0220 12:19:10.056736 31420 generic.go:334] "Generic (PLEG): container finished" podID="8565cedf-c9e9-45a2-a463-00f1f5224559" containerID="66de4746d8519524504725e279badc0d8f23fd8623f9f07793a365f3dbfb15b9" exitCode=0 Feb 20 12:19:10.057454 master-0 kubenswrapper[31420]: I0220 12:19:10.056846 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54fa-account-create-update-5ng4x" event={"ID":"8565cedf-c9e9-45a2-a463-00f1f5224559","Type":"ContainerDied","Data":"66de4746d8519524504725e279badc0d8f23fd8623f9f07793a365f3dbfb15b9"} Feb 20 12:19:10.061487 master-0 kubenswrapper[31420]: I0220 12:19:10.061429 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-62fhw" Feb 20 12:19:10.061487 master-0 kubenswrapper[31420]: I0220 12:19:10.061441 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-62fhw" event={"ID":"eec86992-c892-4514-b5d7-273f76f1c91a","Type":"ContainerDied","Data":"aa403e886a780833bdc3130567228316dbcbd8d0bac804ca1d95802193ef8639"} Feb 20 12:19:10.061487 master-0 kubenswrapper[31420]: I0220 12:19:10.061491 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa403e886a780833bdc3130567228316dbcbd8d0bac804ca1d95802193ef8639" Feb 20 12:19:10.661761 master-0 kubenswrapper[31420]: I0220 12:19:10.659499 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9be4-account-create-update-4jzfk" Feb 20 12:19:10.748032 master-0 kubenswrapper[31420]: I0220 12:19:10.747878 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pprg9\" (UniqueName: \"kubernetes.io/projected/28293a37-d871-493f-8286-a6705a2e5bd8-kube-api-access-pprg9\") pod \"28293a37-d871-493f-8286-a6705a2e5bd8\" (UID: \"28293a37-d871-493f-8286-a6705a2e5bd8\") " Feb 20 12:19:10.748262 master-0 kubenswrapper[31420]: I0220 12:19:10.748230 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28293a37-d871-493f-8286-a6705a2e5bd8-operator-scripts\") pod \"28293a37-d871-493f-8286-a6705a2e5bd8\" (UID: \"28293a37-d871-493f-8286-a6705a2e5bd8\") " Feb 20 12:19:10.752562 master-0 kubenswrapper[31420]: I0220 12:19:10.749096 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28293a37-d871-493f-8286-a6705a2e5bd8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28293a37-d871-493f-8286-a6705a2e5bd8" (UID: "28293a37-d871-493f-8286-a6705a2e5bd8"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:10.757579 master-0 kubenswrapper[31420]: I0220 12:19:10.753636 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28293a37-d871-493f-8286-a6705a2e5bd8-kube-api-access-pprg9" (OuterVolumeSpecName: "kube-api-access-pprg9") pod "28293a37-d871-493f-8286-a6705a2e5bd8" (UID: "28293a37-d871-493f-8286-a6705a2e5bd8"). InnerVolumeSpecName "kube-api-access-pprg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:10.850563 master-0 kubenswrapper[31420]: I0220 12:19:10.850472 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28293a37-d871-493f-8286-a6705a2e5bd8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:10.850563 master-0 kubenswrapper[31420]: I0220 12:19:10.850560 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pprg9\" (UniqueName: \"kubernetes.io/projected/28293a37-d871-493f-8286-a6705a2e5bd8-kube-api-access-pprg9\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.046400 master-0 kubenswrapper[31420]: I0220 12:19:11.046216 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9fa1-account-create-update-88db9" Feb 20 12:19:11.054872 master-0 kubenswrapper[31420]: I0220 12:19:11.054806 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zkxzd" Feb 20 12:19:11.073597 master-0 kubenswrapper[31420]: I0220 12:19:11.073297 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-k8p84" Feb 20 12:19:11.101587 master-0 kubenswrapper[31420]: I0220 12:19:11.101518 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fa1-account-create-update-88db9" event={"ID":"b0bba7eb-9925-4bdc-b18a-09a15d13bb07","Type":"ContainerDied","Data":"e437bf783c8e949b1aa27b791ff730b35eefad605be67a1cf6491ff93d12d6d5"} Feb 20 12:19:11.101587 master-0 kubenswrapper[31420]: I0220 12:19:11.101581 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e437bf783c8e949b1aa27b791ff730b35eefad605be67a1cf6491ff93d12d6d5" Feb 20 12:19:11.101915 master-0 kubenswrapper[31420]: I0220 12:19:11.101612 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9fa1-account-create-update-88db9" Feb 20 12:19:11.107575 master-0 kubenswrapper[31420]: I0220 12:19:11.107494 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gmvz9" event={"ID":"9e5fb862-336d-459e-8b0d-7a688bfc722c","Type":"ContainerDied","Data":"1e814f9f90215e4a8895a62b1925be75b620eb67786301aaa387c5cfea199957"} Feb 20 12:19:11.107575 master-0 kubenswrapper[31420]: I0220 12:19:11.107555 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e814f9f90215e4a8895a62b1925be75b620eb67786301aaa387c5cfea199957" Feb 20 12:19:11.107900 master-0 kubenswrapper[31420]: I0220 12:19:11.107704 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gmvz9" Feb 20 12:19:11.111163 master-0 kubenswrapper[31420]: I0220 12:19:11.109178 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zkxzd" Feb 20 12:19:11.111163 master-0 kubenswrapper[31420]: I0220 12:19:11.109177 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zkxzd" event={"ID":"57a83694-e4aa-4031-8ba3-7eaab90b0abd","Type":"ContainerDied","Data":"d33e2e556b5b8c42c348d08aef7a570d48ae1e8798e8034fa9aca426b7eda7bf"} Feb 20 12:19:11.111163 master-0 kubenswrapper[31420]: I0220 12:19:11.109285 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33e2e556b5b8c42c348d08aef7a570d48ae1e8798e8034fa9aca426b7eda7bf" Feb 20 12:19:11.111163 master-0 kubenswrapper[31420]: I0220 12:19:11.110489 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9be4-account-create-update-4jzfk" event={"ID":"28293a37-d871-493f-8286-a6705a2e5bd8","Type":"ContainerDied","Data":"1769c2475ac61bca9a2e2417528e285125168921f5b14c9a0312a8e508299c90"} Feb 20 12:19:11.111163 master-0 kubenswrapper[31420]: I0220 12:19:11.110511 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1769c2475ac61bca9a2e2417528e285125168921f5b14c9a0312a8e508299c90" Feb 20 12:19:11.111163 master-0 kubenswrapper[31420]: I0220 12:19:11.110578 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9be4-account-create-update-4jzfk" Feb 20 12:19:11.142103 master-0 kubenswrapper[31420]: I0220 12:19:11.139605 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-k8p84" event={"ID":"0311ef3e-c313-4f78-86ce-371566b44c31","Type":"ContainerDied","Data":"bfbe3629eaec8d6546307cce75474aaacf4c57d882fc6d6399f3ae9b4cf4a648"} Feb 20 12:19:11.142381 master-0 kubenswrapper[31420]: I0220 12:19:11.142107 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfbe3629eaec8d6546307cce75474aaacf4c57d882fc6d6399f3ae9b4cf4a648" Feb 20 12:19:11.156953 master-0 kubenswrapper[31420]: I0220 12:19:11.143014 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-k8p84" Feb 20 12:19:11.168610 master-0 kubenswrapper[31420]: I0220 12:19:11.168414 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a83694-e4aa-4031-8ba3-7eaab90b0abd-operator-scripts\") pod \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\" (UID: \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\") " Feb 20 12:19:11.168730 master-0 kubenswrapper[31420]: I0220 12:19:11.168640 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0311ef3e-c313-4f78-86ce-371566b44c31-operator-scripts\") pod \"0311ef3e-c313-4f78-86ce-371566b44c31\" (UID: \"0311ef3e-c313-4f78-86ce-371566b44c31\") " Feb 20 12:19:11.168803 master-0 kubenswrapper[31420]: I0220 12:19:11.168791 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n54g8\" (UniqueName: \"kubernetes.io/projected/57a83694-e4aa-4031-8ba3-7eaab90b0abd-kube-api-access-n54g8\") pod \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\" (UID: \"57a83694-e4aa-4031-8ba3-7eaab90b0abd\") " Feb 20 12:19:11.170323 master-0 
kubenswrapper[31420]: I0220 12:19:11.169105 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvsjb\" (UniqueName: \"kubernetes.io/projected/0311ef3e-c313-4f78-86ce-371566b44c31-kube-api-access-kvsjb\") pod \"0311ef3e-c313-4f78-86ce-371566b44c31\" (UID: \"0311ef3e-c313-4f78-86ce-371566b44c31\") " Feb 20 12:19:11.170323 master-0 kubenswrapper[31420]: I0220 12:19:11.169415 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svsnf\" (UniqueName: \"kubernetes.io/projected/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-kube-api-access-svsnf\") pod \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\" (UID: \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\") " Feb 20 12:19:11.170323 master-0 kubenswrapper[31420]: I0220 12:19:11.169636 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0311ef3e-c313-4f78-86ce-371566b44c31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0311ef3e-c313-4f78-86ce-371566b44c31" (UID: "0311ef3e-c313-4f78-86ce-371566b44c31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:11.170323 master-0 kubenswrapper[31420]: I0220 12:19:11.169699 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-operator-scripts\") pod \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\" (UID: \"b0bba7eb-9925-4bdc-b18a-09a15d13bb07\") " Feb 20 12:19:11.172103 master-0 kubenswrapper[31420]: I0220 12:19:11.171892 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a83694-e4aa-4031-8ba3-7eaab90b0abd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57a83694-e4aa-4031-8ba3-7eaab90b0abd" (UID: "57a83694-e4aa-4031-8ba3-7eaab90b0abd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:11.172650 master-0 kubenswrapper[31420]: I0220 12:19:11.172557 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0bba7eb-9925-4bdc-b18a-09a15d13bb07" (UID: "b0bba7eb-9925-4bdc-b18a-09a15d13bb07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:11.177594 master-0 kubenswrapper[31420]: I0220 12:19:11.175851 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a83694-e4aa-4031-8ba3-7eaab90b0abd-kube-api-access-n54g8" (OuterVolumeSpecName: "kube-api-access-n54g8") pod "57a83694-e4aa-4031-8ba3-7eaab90b0abd" (UID: "57a83694-e4aa-4031-8ba3-7eaab90b0abd"). InnerVolumeSpecName "kube-api-access-n54g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:11.177594 master-0 kubenswrapper[31420]: I0220 12:19:11.175966 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-kube-api-access-svsnf" (OuterVolumeSpecName: "kube-api-access-svsnf") pod "b0bba7eb-9925-4bdc-b18a-09a15d13bb07" (UID: "b0bba7eb-9925-4bdc-b18a-09a15d13bb07"). InnerVolumeSpecName "kube-api-access-svsnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:11.189750 master-0 kubenswrapper[31420]: I0220 12:19:11.189653 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.189902 master-0 kubenswrapper[31420]: I0220 12:19:11.189890 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a83694-e4aa-4031-8ba3-7eaab90b0abd-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.190020 master-0 kubenswrapper[31420]: I0220 12:19:11.189993 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0311ef3e-c313-4f78-86ce-371566b44c31-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.190110 master-0 kubenswrapper[31420]: I0220 12:19:11.190080 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n54g8\" (UniqueName: \"kubernetes.io/projected/57a83694-e4aa-4031-8ba3-7eaab90b0abd-kube-api-access-n54g8\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.190191 master-0 kubenswrapper[31420]: I0220 12:19:11.190180 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svsnf\" (UniqueName: \"kubernetes.io/projected/b0bba7eb-9925-4bdc-b18a-09a15d13bb07-kube-api-access-svsnf\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.194775 master-0 kubenswrapper[31420]: I0220 12:19:11.194659 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0311ef3e-c313-4f78-86ce-371566b44c31-kube-api-access-kvsjb" (OuterVolumeSpecName: "kube-api-access-kvsjb") pod "0311ef3e-c313-4f78-86ce-371566b44c31" (UID: "0311ef3e-c313-4f78-86ce-371566b44c31"). InnerVolumeSpecName "kube-api-access-kvsjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:11.291554 master-0 kubenswrapper[31420]: I0220 12:19:11.291471 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp445\" (UniqueName: \"kubernetes.io/projected/9e5fb862-336d-459e-8b0d-7a688bfc722c-kube-api-access-tp445\") pod \"9e5fb862-336d-459e-8b0d-7a688bfc722c\" (UID: \"9e5fb862-336d-459e-8b0d-7a688bfc722c\") " Feb 20 12:19:11.291738 master-0 kubenswrapper[31420]: I0220 12:19:11.291629 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e5fb862-336d-459e-8b0d-7a688bfc722c-operator-scripts\") pod \"9e5fb862-336d-459e-8b0d-7a688bfc722c\" (UID: \"9e5fb862-336d-459e-8b0d-7a688bfc722c\") " Feb 20 12:19:11.292130 master-0 kubenswrapper[31420]: I0220 12:19:11.292104 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvsjb\" (UniqueName: \"kubernetes.io/projected/0311ef3e-c313-4f78-86ce-371566b44c31-kube-api-access-kvsjb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.292568 master-0 kubenswrapper[31420]: I0220 12:19:11.292542 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5fb862-336d-459e-8b0d-7a688bfc722c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e5fb862-336d-459e-8b0d-7a688bfc722c" (UID: "9e5fb862-336d-459e-8b0d-7a688bfc722c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:11.296866 master-0 kubenswrapper[31420]: I0220 12:19:11.296797 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e5fb862-336d-459e-8b0d-7a688bfc722c-kube-api-access-tp445" (OuterVolumeSpecName: "kube-api-access-tp445") pod "9e5fb862-336d-459e-8b0d-7a688bfc722c" (UID: "9e5fb862-336d-459e-8b0d-7a688bfc722c"). InnerVolumeSpecName "kube-api-access-tp445". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:11.395072 master-0 kubenswrapper[31420]: I0220 12:19:11.394955 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp445\" (UniqueName: \"kubernetes.io/projected/9e5fb862-336d-459e-8b0d-7a688bfc722c-kube-api-access-tp445\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.395072 master-0 kubenswrapper[31420]: I0220 12:19:11.395047 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e5fb862-336d-459e-8b0d-7a688bfc722c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.671785 master-0 kubenswrapper[31420]: I0220 12:19:11.671720 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54fa-account-create-update-5ng4x" Feb 20 12:19:11.804671 master-0 kubenswrapper[31420]: I0220 12:19:11.804591 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk6ss\" (UniqueName: \"kubernetes.io/projected/8565cedf-c9e9-45a2-a463-00f1f5224559-kube-api-access-hk6ss\") pod \"8565cedf-c9e9-45a2-a463-00f1f5224559\" (UID: \"8565cedf-c9e9-45a2-a463-00f1f5224559\") " Feb 20 12:19:11.804964 master-0 kubenswrapper[31420]: I0220 12:19:11.804824 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8565cedf-c9e9-45a2-a463-00f1f5224559-operator-scripts\") pod \"8565cedf-c9e9-45a2-a463-00f1f5224559\" (UID: \"8565cedf-c9e9-45a2-a463-00f1f5224559\") " Feb 20 12:19:11.808037 master-0 kubenswrapper[31420]: I0220 12:19:11.806142 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8565cedf-c9e9-45a2-a463-00f1f5224559-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8565cedf-c9e9-45a2-a463-00f1f5224559" (UID: "8565cedf-c9e9-45a2-a463-00f1f5224559"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:11.814441 master-0 kubenswrapper[31420]: I0220 12:19:11.814232 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8565cedf-c9e9-45a2-a463-00f1f5224559-kube-api-access-hk6ss" (OuterVolumeSpecName: "kube-api-access-hk6ss") pod "8565cedf-c9e9-45a2-a463-00f1f5224559" (UID: "8565cedf-c9e9-45a2-a463-00f1f5224559"). InnerVolumeSpecName "kube-api-access-hk6ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:11.907918 master-0 kubenswrapper[31420]: I0220 12:19:11.907832 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk6ss\" (UniqueName: \"kubernetes.io/projected/8565cedf-c9e9-45a2-a463-00f1f5224559-kube-api-access-hk6ss\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:11.908223 master-0 kubenswrapper[31420]: I0220 12:19:11.907934 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8565cedf-c9e9-45a2-a463-00f1f5224559-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:12.157448 master-0 kubenswrapper[31420]: I0220 12:19:12.157257 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54fa-account-create-update-5ng4x" event={"ID":"8565cedf-c9e9-45a2-a463-00f1f5224559","Type":"ContainerDied","Data":"d29f95b691317daf4c5c33cb151ce1c4903f63c70d00e924f2ddd8151076ed9e"} Feb 20 12:19:12.157448 master-0 kubenswrapper[31420]: I0220 12:19:12.157304 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54fa-account-create-update-5ng4x" Feb 20 12:19:12.158408 master-0 kubenswrapper[31420]: I0220 12:19:12.157313 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gmvz9" Feb 20 12:19:12.158408 master-0 kubenswrapper[31420]: I0220 12:19:12.157322 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d29f95b691317daf4c5c33cb151ce1c4903f63c70d00e924f2ddd8151076ed9e" Feb 20 12:19:13.860008 master-0 kubenswrapper[31420]: I0220 12:19:13.859925 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-62fhw"] Feb 20 12:19:13.867955 master-0 kubenswrapper[31420]: I0220 12:19:13.867883 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-62fhw"] Feb 20 12:19:13.950006 master-0 kubenswrapper[31420]: I0220 12:19:13.949865 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sztf8"] Feb 20 12:19:13.950770 master-0 kubenswrapper[31420]: E0220 12:19:13.950730 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a83694-e4aa-4031-8ba3-7eaab90b0abd" containerName="mariadb-database-create" Feb 20 12:19:13.950770 master-0 kubenswrapper[31420]: I0220 12:19:13.950758 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a83694-e4aa-4031-8ba3-7eaab90b0abd" containerName="mariadb-database-create" Feb 20 12:19:13.950770 master-0 kubenswrapper[31420]: E0220 12:19:13.950775 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8565cedf-c9e9-45a2-a463-00f1f5224559" containerName="mariadb-account-create-update" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: I0220 12:19:13.950783 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8565cedf-c9e9-45a2-a463-00f1f5224559" containerName="mariadb-account-create-update" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: E0220 12:19:13.950823 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0311ef3e-c313-4f78-86ce-371566b44c31" containerName="mariadb-database-create" Feb 20 12:19:13.950964 master-0 
kubenswrapper[31420]: I0220 12:19:13.950845 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="0311ef3e-c313-4f78-86ce-371566b44c31" containerName="mariadb-database-create" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: E0220 12:19:13.950858 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5729d2-75e9-403d-828f-2739cfc261e0" containerName="init" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: I0220 12:19:13.950867 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5729d2-75e9-403d-828f-2739cfc261e0" containerName="init" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: E0220 12:19:13.950886 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e5fb862-336d-459e-8b0d-7a688bfc722c" containerName="mariadb-database-create" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: I0220 12:19:13.950893 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e5fb862-336d-459e-8b0d-7a688bfc722c" containerName="mariadb-database-create" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: E0220 12:19:13.950925 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28293a37-d871-493f-8286-a6705a2e5bd8" containerName="mariadb-account-create-update" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: I0220 12:19:13.950936 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="28293a37-d871-493f-8286-a6705a2e5bd8" containerName="mariadb-account-create-update" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: E0220 12:19:13.950948 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5729d2-75e9-403d-828f-2739cfc261e0" containerName="dnsmasq-dns" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: I0220 12:19:13.950954 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5729d2-75e9-403d-828f-2739cfc261e0" containerName="dnsmasq-dns" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: E0220 12:19:13.950971 31420 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="eec86992-c892-4514-b5d7-273f76f1c91a" containerName="mariadb-account-create-update" Feb 20 12:19:13.950964 master-0 kubenswrapper[31420]: I0220 12:19:13.950984 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec86992-c892-4514-b5d7-273f76f1c91a" containerName="mariadb-account-create-update" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: E0220 12:19:13.951003 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0bba7eb-9925-4bdc-b18a-09a15d13bb07" containerName="mariadb-account-create-update" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: I0220 12:19:13.951013 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0bba7eb-9925-4bdc-b18a-09a15d13bb07" containerName="mariadb-account-create-update" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: I0220 12:19:13.951292 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec86992-c892-4514-b5d7-273f76f1c91a" containerName="mariadb-account-create-update" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: I0220 12:19:13.951334 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0bba7eb-9925-4bdc-b18a-09a15d13bb07" containerName="mariadb-account-create-update" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: I0220 12:19:13.951345 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="0311ef3e-c313-4f78-86ce-371566b44c31" containerName="mariadb-database-create" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: I0220 12:19:13.951359 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8565cedf-c9e9-45a2-a463-00f1f5224559" containerName="mariadb-account-create-update" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: I0220 12:19:13.951367 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5729d2-75e9-403d-828f-2739cfc261e0" containerName="dnsmasq-dns" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: I0220 
12:19:13.951387 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e5fb862-336d-459e-8b0d-7a688bfc722c" containerName="mariadb-database-create" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: I0220 12:19:13.951405 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a83694-e4aa-4031-8ba3-7eaab90b0abd" containerName="mariadb-database-create" Feb 20 12:19:13.951497 master-0 kubenswrapper[31420]: I0220 12:19:13.951425 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="28293a37-d871-493f-8286-a6705a2e5bd8" containerName="mariadb-account-create-update" Feb 20 12:19:13.952597 master-0 kubenswrapper[31420]: I0220 12:19:13.952552 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:13.954510 master-0 kubenswrapper[31420]: I0220 12:19:13.954467 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 20 12:19:13.962996 master-0 kubenswrapper[31420]: I0220 12:19:13.962941 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sztf8"] Feb 20 12:19:14.057430 master-0 kubenswrapper[31420]: I0220 12:19:14.057305 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b293179-ca05-4d55-8691-120c2b338814-operator-scripts\") pod \"root-account-create-update-sztf8\" (UID: \"9b293179-ca05-4d55-8691-120c2b338814\") " pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:14.057882 master-0 kubenswrapper[31420]: I0220 12:19:14.057802 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tv9v\" (UniqueName: \"kubernetes.io/projected/9b293179-ca05-4d55-8691-120c2b338814-kube-api-access-4tv9v\") pod \"root-account-create-update-sztf8\" (UID: 
\"9b293179-ca05-4d55-8691-120c2b338814\") " pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:14.160690 master-0 kubenswrapper[31420]: I0220 12:19:14.160607 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tv9v\" (UniqueName: \"kubernetes.io/projected/9b293179-ca05-4d55-8691-120c2b338814-kube-api-access-4tv9v\") pod \"root-account-create-update-sztf8\" (UID: \"9b293179-ca05-4d55-8691-120c2b338814\") " pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:14.161734 master-0 kubenswrapper[31420]: I0220 12:19:14.161679 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b293179-ca05-4d55-8691-120c2b338814-operator-scripts\") pod \"root-account-create-update-sztf8\" (UID: \"9b293179-ca05-4d55-8691-120c2b338814\") " pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:14.163305 master-0 kubenswrapper[31420]: I0220 12:19:14.163227 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b293179-ca05-4d55-8691-120c2b338814-operator-scripts\") pod \"root-account-create-update-sztf8\" (UID: \"9b293179-ca05-4d55-8691-120c2b338814\") " pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:14.184552 master-0 kubenswrapper[31420]: I0220 12:19:14.184463 31420 generic.go:334] "Generic (PLEG): container finished" podID="a64c3aa0-12b3-412e-804c-0fd4a79bc80f" containerID="85eb9edcf2aec7e72a973859c2082b33a88f4a83123b34d03c1f4e6e204817dc" exitCode=0 Feb 20 12:19:14.184753 master-0 kubenswrapper[31420]: I0220 12:19:14.184517 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lk7sj" event={"ID":"a64c3aa0-12b3-412e-804c-0fd4a79bc80f","Type":"ContainerDied","Data":"85eb9edcf2aec7e72a973859c2082b33a88f4a83123b34d03c1f4e6e204817dc"} Feb 20 12:19:14.185035 master-0 kubenswrapper[31420]: I0220 
12:19:14.184996 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tv9v\" (UniqueName: \"kubernetes.io/projected/9b293179-ca05-4d55-8691-120c2b338814-kube-api-access-4tv9v\") pod \"root-account-create-update-sztf8\" (UID: \"9b293179-ca05-4d55-8691-120c2b338814\") " pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:14.314229 master-0 kubenswrapper[31420]: I0220 12:19:14.313998 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:14.847428 master-0 kubenswrapper[31420]: I0220 12:19:14.847371 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sztf8"] Feb 20 12:19:14.856482 master-0 kubenswrapper[31420]: W0220 12:19:14.856430 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b293179_ca05_4d55_8691_120c2b338814.slice/crio-a522ea28959d53286445d310a7f8b24d5a785364f704a01ffb68cbd58e4dab09 WatchSource:0}: Error finding container a522ea28959d53286445d310a7f8b24d5a785364f704a01ffb68cbd58e4dab09: Status 404 returned error can't find the container with id a522ea28959d53286445d310a7f8b24d5a785364f704a01ffb68cbd58e4dab09 Feb 20 12:19:15.199132 master-0 kubenswrapper[31420]: I0220 12:19:15.198955 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sztf8" event={"ID":"9b293179-ca05-4d55-8691-120c2b338814","Type":"ContainerStarted","Data":"5df8110662bd016fd8287deb6b6772e83dce00230979d49dc7488b445f50b811"} Feb 20 12:19:15.199132 master-0 kubenswrapper[31420]: I0220 12:19:15.199018 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sztf8" event={"ID":"9b293179-ca05-4d55-8691-120c2b338814","Type":"ContainerStarted","Data":"a522ea28959d53286445d310a7f8b24d5a785364f704a01ffb68cbd58e4dab09"} Feb 20 12:19:15.233580 master-0 
kubenswrapper[31420]: I0220 12:19:15.233443 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-sztf8" podStartSLOduration=2.233419544 podStartE2EDuration="2.233419544s" podCreationTimestamp="2026-02-20 12:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:19:15.224869144 +0000 UTC m=+859.944107425" watchObservedRunningTime="2026-02-20 12:19:15.233419544 +0000 UTC m=+859.952657825" Feb 20 12:19:15.521641 master-0 kubenswrapper[31420]: I0220 12:19:15.516205 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec86992-c892-4514-b5d7-273f76f1c91a" path="/var/lib/kubelet/pods/eec86992-c892-4514-b5d7-273f76f1c91a/volumes" Feb 20 12:19:15.717377 master-0 kubenswrapper[31420]: I0220 12:19:15.717301 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lk7sj" Feb 20 12:19:15.802776 master-0 kubenswrapper[31420]: I0220 12:19:15.802514 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-combined-ca-bundle\") pod \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " Feb 20 12:19:15.802776 master-0 kubenswrapper[31420]: I0220 12:19:15.802760 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-ring-data-devices\") pod \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " Feb 20 12:19:15.803260 master-0 kubenswrapper[31420]: I0220 12:19:15.802924 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g274b\" (UniqueName: 
\"kubernetes.io/projected/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-kube-api-access-g274b\") pod \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " Feb 20 12:19:15.803260 master-0 kubenswrapper[31420]: I0220 12:19:15.802991 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-scripts\") pod \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " Feb 20 12:19:15.803260 master-0 kubenswrapper[31420]: I0220 12:19:15.803044 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-dispersionconf\") pod \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " Feb 20 12:19:15.803260 master-0 kubenswrapper[31420]: I0220 12:19:15.803181 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-etc-swift\") pod \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " Feb 20 12:19:15.803767 master-0 kubenswrapper[31420]: I0220 12:19:15.803299 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-swiftconf\") pod \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\" (UID: \"a64c3aa0-12b3-412e-804c-0fd4a79bc80f\") " Feb 20 12:19:15.804121 master-0 kubenswrapper[31420]: I0220 12:19:15.804050 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 
12:19:15.806139 master-0 kubenswrapper[31420]: I0220 12:19:15.806070 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a64c3aa0-12b3-412e-804c-0fd4a79bc80f" (UID: "a64c3aa0-12b3-412e-804c-0fd4a79bc80f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:19:15.806490 master-0 kubenswrapper[31420]: I0220 12:19:15.806423 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a64c3aa0-12b3-412e-804c-0fd4a79bc80f" (UID: "a64c3aa0-12b3-412e-804c-0fd4a79bc80f"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:15.811162 master-0 kubenswrapper[31420]: I0220 12:19:15.811060 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-kube-api-access-g274b" (OuterVolumeSpecName: "kube-api-access-g274b") pod "a64c3aa0-12b3-412e-804c-0fd4a79bc80f" (UID: "a64c3aa0-12b3-412e-804c-0fd4a79bc80f"). InnerVolumeSpecName "kube-api-access-g274b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:15.814255 master-0 kubenswrapper[31420]: I0220 12:19:15.814153 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad-etc-swift\") pod \"swift-storage-0\" (UID: \"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad\") " pod="openstack/swift-storage-0" Feb 20 12:19:15.821079 master-0 kubenswrapper[31420]: I0220 12:19:15.821007 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a64c3aa0-12b3-412e-804c-0fd4a79bc80f" (UID: "a64c3aa0-12b3-412e-804c-0fd4a79bc80f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:19:15.854259 master-0 kubenswrapper[31420]: I0220 12:19:15.854187 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a64c3aa0-12b3-412e-804c-0fd4a79bc80f" (UID: "a64c3aa0-12b3-412e-804c-0fd4a79bc80f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:19:15.858267 master-0 kubenswrapper[31420]: I0220 12:19:15.858180 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a64c3aa0-12b3-412e-804c-0fd4a79bc80f" (UID: "a64c3aa0-12b3-412e-804c-0fd4a79bc80f"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:19:15.859620 master-0 kubenswrapper[31420]: I0220 12:19:15.859482 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-scripts" (OuterVolumeSpecName: "scripts") pod "a64c3aa0-12b3-412e-804c-0fd4a79bc80f" (UID: "a64c3aa0-12b3-412e-804c-0fd4a79bc80f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:15.906562 master-0 kubenswrapper[31420]: I0220 12:19:15.906491 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g274b\" (UniqueName: \"kubernetes.io/projected/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-kube-api-access-g274b\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:15.906794 master-0 kubenswrapper[31420]: I0220 12:19:15.906611 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:15.906794 master-0 kubenswrapper[31420]: I0220 12:19:15.906630 31420 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-dispersionconf\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:15.906794 master-0 kubenswrapper[31420]: I0220 12:19:15.906641 31420 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-etc-swift\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:15.906794 master-0 kubenswrapper[31420]: I0220 12:19:15.906698 31420 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-swiftconf\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:15.906794 master-0 kubenswrapper[31420]: I0220 12:19:15.906713 31420 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:15.906794 master-0 kubenswrapper[31420]: I0220 12:19:15.906724 31420 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a64c3aa0-12b3-412e-804c-0fd4a79bc80f-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:16.018775 master-0 kubenswrapper[31420]: I0220 12:19:16.018699 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 20 12:19:16.219431 master-0 kubenswrapper[31420]: I0220 12:19:16.219354 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lk7sj" event={"ID":"a64c3aa0-12b3-412e-804c-0fd4a79bc80f","Type":"ContainerDied","Data":"21128e5dca57f46c69c3147d608dfbb92f37dd25c4c021ec39a8dfcdee34cbc5"} Feb 20 12:19:16.219431 master-0 kubenswrapper[31420]: I0220 12:19:16.219414 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lk7sj" Feb 20 12:19:16.219991 master-0 kubenswrapper[31420]: I0220 12:19:16.219439 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21128e5dca57f46c69c3147d608dfbb92f37dd25c4c021ec39a8dfcdee34cbc5" Feb 20 12:19:16.221506 master-0 kubenswrapper[31420]: I0220 12:19:16.221466 31420 generic.go:334] "Generic (PLEG): container finished" podID="9b293179-ca05-4d55-8691-120c2b338814" containerID="5df8110662bd016fd8287deb6b6772e83dce00230979d49dc7488b445f50b811" exitCode=0 Feb 20 12:19:16.221588 master-0 kubenswrapper[31420]: I0220 12:19:16.221517 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sztf8" event={"ID":"9b293179-ca05-4d55-8691-120c2b338814","Type":"ContainerDied","Data":"5df8110662bd016fd8287deb6b6772e83dce00230979d49dc7488b445f50b811"} Feb 20 12:19:16.624259 master-0 kubenswrapper[31420]: I0220 12:19:16.623961 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 20 12:19:16.995574 master-0 kubenswrapper[31420]: I0220 12:19:16.995398 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-2d6nq"] Feb 20 12:19:16.996010 master-0 kubenswrapper[31420]: E0220 12:19:16.995919 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64c3aa0-12b3-412e-804c-0fd4a79bc80f" containerName="swift-ring-rebalance" Feb 20 12:19:16.996010 master-0 kubenswrapper[31420]: I0220 12:19:16.995937 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64c3aa0-12b3-412e-804c-0fd4a79bc80f" containerName="swift-ring-rebalance" Feb 20 12:19:16.996555 master-0 kubenswrapper[31420]: I0220 12:19:16.996198 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64c3aa0-12b3-412e-804c-0fd4a79bc80f" containerName="swift-ring-rebalance" Feb 20 12:19:17.015569 master-0 kubenswrapper[31420]: I0220 12:19:16.997498 31420 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.020290 master-0 kubenswrapper[31420]: I0220 12:19:17.018339 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e60fa-config-data" Feb 20 12:19:17.035011 master-0 kubenswrapper[31420]: I0220 12:19:17.034948 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2d6nq"] Feb 20 12:19:17.139245 master-0 kubenswrapper[31420]: I0220 12:19:17.139161 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-combined-ca-bundle\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.139485 master-0 kubenswrapper[31420]: I0220 12:19:17.139257 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-config-data\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.139485 master-0 kubenswrapper[31420]: I0220 12:19:17.139299 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-db-sync-config-data\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.139633 master-0 kubenswrapper[31420]: I0220 12:19:17.139559 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tsbh\" (UniqueName: \"kubernetes.io/projected/964cb0d1-eb1a-404e-b395-0a733f4ae02b-kube-api-access-2tsbh\") pod \"glance-db-sync-2d6nq\" (UID: 
\"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.237474 master-0 kubenswrapper[31420]: I0220 12:19:17.237372 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"1610369f9d7c7bacef152d3e508bb84d3645f14a61cf427136e615ed8dc45ba3"} Feb 20 12:19:17.241023 master-0 kubenswrapper[31420]: I0220 12:19:17.240959 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tsbh\" (UniqueName: \"kubernetes.io/projected/964cb0d1-eb1a-404e-b395-0a733f4ae02b-kube-api-access-2tsbh\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.241426 master-0 kubenswrapper[31420]: I0220 12:19:17.241326 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-combined-ca-bundle\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.241590 master-0 kubenswrapper[31420]: I0220 12:19:17.241555 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-config-data\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.241682 master-0 kubenswrapper[31420]: I0220 12:19:17.241661 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-db-sync-config-data\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.252740 master-0 
kubenswrapper[31420]: I0220 12:19:17.247878 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-config-data\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.252740 master-0 kubenswrapper[31420]: I0220 12:19:17.249017 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-db-sync-config-data\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.252740 master-0 kubenswrapper[31420]: I0220 12:19:17.249561 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-combined-ca-bundle\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.282841 master-0 kubenswrapper[31420]: I0220 12:19:17.282724 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tsbh\" (UniqueName: \"kubernetes.io/projected/964cb0d1-eb1a-404e-b395-0a733f4ae02b-kube-api-access-2tsbh\") pod \"glance-db-sync-2d6nq\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.358298 master-0 kubenswrapper[31420]: I0220 12:19:17.358231 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:17.632039 master-0 kubenswrapper[31420]: I0220 12:19:17.631975 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 20 12:19:17.944125 master-0 kubenswrapper[31420]: I0220 12:19:17.944079 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:18.021518 master-0 kubenswrapper[31420]: I0220 12:19:18.021478 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2d6nq"] Feb 20 12:19:18.080945 master-0 kubenswrapper[31420]: I0220 12:19:18.080882 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b293179-ca05-4d55-8691-120c2b338814-operator-scripts\") pod \"9b293179-ca05-4d55-8691-120c2b338814\" (UID: \"9b293179-ca05-4d55-8691-120c2b338814\") " Feb 20 12:19:18.081146 master-0 kubenswrapper[31420]: I0220 12:19:18.081122 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tv9v\" (UniqueName: \"kubernetes.io/projected/9b293179-ca05-4d55-8691-120c2b338814-kube-api-access-4tv9v\") pod \"9b293179-ca05-4d55-8691-120c2b338814\" (UID: \"9b293179-ca05-4d55-8691-120c2b338814\") " Feb 20 12:19:18.081693 master-0 kubenswrapper[31420]: I0220 12:19:18.081468 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b293179-ca05-4d55-8691-120c2b338814-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b293179-ca05-4d55-8691-120c2b338814" (UID: "9b293179-ca05-4d55-8691-120c2b338814"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:18.082093 master-0 kubenswrapper[31420]: I0220 12:19:18.082061 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b293179-ca05-4d55-8691-120c2b338814-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:18.086245 master-0 kubenswrapper[31420]: I0220 12:19:18.085593 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b293179-ca05-4d55-8691-120c2b338814-kube-api-access-4tv9v" (OuterVolumeSpecName: "kube-api-access-4tv9v") pod "9b293179-ca05-4d55-8691-120c2b338814" (UID: "9b293179-ca05-4d55-8691-120c2b338814"). InnerVolumeSpecName "kube-api-access-4tv9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:18.184402 master-0 kubenswrapper[31420]: I0220 12:19:18.184341 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tv9v\" (UniqueName: \"kubernetes.io/projected/9b293179-ca05-4d55-8691-120c2b338814-kube-api-access-4tv9v\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:18.249318 master-0 kubenswrapper[31420]: I0220 12:19:18.248959 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2d6nq" event={"ID":"964cb0d1-eb1a-404e-b395-0a733f4ae02b","Type":"ContainerStarted","Data":"979e0a5a357b1d3f1b4604fc77d1bf3863ced13adad1606935aa20ff836c59a0"} Feb 20 12:19:18.251237 master-0 kubenswrapper[31420]: I0220 12:19:18.251200 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"8df7931656d9261a6e9dc86fb76dc3ddd79fa2e841bcb40876fa406c1af39398"} Feb 20 12:19:18.253031 master-0 kubenswrapper[31420]: I0220 12:19:18.252971 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sztf8" 
event={"ID":"9b293179-ca05-4d55-8691-120c2b338814","Type":"ContainerDied","Data":"a522ea28959d53286445d310a7f8b24d5a785364f704a01ffb68cbd58e4dab09"} Feb 20 12:19:18.253086 master-0 kubenswrapper[31420]: I0220 12:19:18.253033 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a522ea28959d53286445d310a7f8b24d5a785364f704a01ffb68cbd58e4dab09" Feb 20 12:19:18.253164 master-0 kubenswrapper[31420]: I0220 12:19:18.253088 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sztf8" Feb 20 12:19:19.265693 master-0 kubenswrapper[31420]: I0220 12:19:19.265609 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"1563f002ec3a19f8b390478153f2e811201f5f61a167d8473f2c363bef0727f4"} Feb 20 12:19:19.265693 master-0 kubenswrapper[31420]: I0220 12:19:19.265676 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"c63d8913748c0c855545ecd9bb052b43f887463f8c3216782ac76b2aef31d546"} Feb 20 12:19:19.266404 master-0 kubenswrapper[31420]: I0220 12:19:19.265744 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"eb7ffd5a06598121ac0a60f23ee740dda42a01aa49254157dde51fe6499cc5f9"} Feb 20 12:19:20.288684 master-0 kubenswrapper[31420]: I0220 12:19:20.288598 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"4275c1e5dc01114d9376ac13923af7323a250b296c153fe2861befbc81190303"} Feb 20 12:19:21.309494 master-0 kubenswrapper[31420]: I0220 12:19:21.309432 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"987f75360f0d09c895c2a91a4bc9b0d9b498b704d9ef74e5461d650d418ad3b0"} Feb 20 12:19:21.309494 master-0 kubenswrapper[31420]: I0220 12:19:21.309495 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"a41608094d7fa99bbc918bc7d929fb3d8d13a5ed8300109658d70e06ba046275"} Feb 20 12:19:21.309494 master-0 kubenswrapper[31420]: I0220 12:19:21.309505 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"e8a62e596a998e5eb8cfdecb1867bcbc08c327bcce86949c9e61ccb97e6e0ccd"} Feb 20 12:19:21.557144 master-0 kubenswrapper[31420]: I0220 12:19:21.557042 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4twdx" podUID="d653c352-bccc-4fb7-bba0-97ad923e92e4" containerName="ovn-controller" probeResult="failure" output=< Feb 20 12:19:21.557144 master-0 kubenswrapper[31420]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 20 12:19:21.557144 master-0 kubenswrapper[31420]: > Feb 20 12:19:22.332731 master-0 kubenswrapper[31420]: I0220 12:19:22.332687 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"caf50cf094b3f797dc1fc029d5b415bce7621c87026fa95ed5c03691c654f50a"} Feb 20 12:19:23.352241 master-0 kubenswrapper[31420]: I0220 12:19:23.352195 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"5c9a3537fe2de3e774a16915e8c5989a4f97d74d39c7159cad4033661e57f6cf"} Feb 20 12:19:23.352862 master-0 kubenswrapper[31420]: I0220 
12:19:23.352253 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"175af0dc9c16bc70600c64289844a038850f721886a4bffa96b1ecdad5eb56e0"} Feb 20 12:19:23.352862 master-0 kubenswrapper[31420]: I0220 12:19:23.352269 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"3d1ad8c7efba01be6291617f400b473498f0f32ebd350864dc28976bbc08bb97"} Feb 20 12:19:23.352862 master-0 kubenswrapper[31420]: I0220 12:19:23.352282 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"73a5bebdac800428f3e0e2512b0535d759e2ad6618d23bfd73abfef7d7cd94b6"} Feb 20 12:19:24.365092 master-0 kubenswrapper[31420]: I0220 12:19:24.365035 31420 generic.go:334] "Generic (PLEG): container finished" podID="de0e242c-6018-42c0-8a59-b755e2bd36b0" containerID="e8317f9af6dd37b6f0e47f2c5378d5aa457a3d9a46248b81bc77b1ada35e7013" exitCode=0 Feb 20 12:19:24.365748 master-0 kubenswrapper[31420]: I0220 12:19:24.365140 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de0e242c-6018-42c0-8a59-b755e2bd36b0","Type":"ContainerDied","Data":"e8317f9af6dd37b6f0e47f2c5378d5aa457a3d9a46248b81bc77b1ada35e7013"} Feb 20 12:19:24.368280 master-0 kubenswrapper[31420]: I0220 12:19:24.368200 31420 generic.go:334] "Generic (PLEG): container finished" podID="3027dc76-27b3-44c4-b217-885670c3e29e" containerID="16f9eba54beb3b518c519cf6dc87986f00661bde733dc9205d462389d48e75c7" exitCode=0 Feb 20 12:19:24.368384 master-0 kubenswrapper[31420]: I0220 12:19:24.368282 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"3027dc76-27b3-44c4-b217-885670c3e29e","Type":"ContainerDied","Data":"16f9eba54beb3b518c519cf6dc87986f00661bde733dc9205d462389d48e75c7"} Feb 20 12:19:26.566752 master-0 kubenswrapper[31420]: I0220 12:19:26.566690 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4twdx" podUID="d653c352-bccc-4fb7-bba0-97ad923e92e4" containerName="ovn-controller" probeResult="failure" output=< Feb 20 12:19:26.566752 master-0 kubenswrapper[31420]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 20 12:19:26.566752 master-0 kubenswrapper[31420]: > Feb 20 12:19:26.569750 master-0 kubenswrapper[31420]: I0220 12:19:26.566836 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:19:26.588971 master-0 kubenswrapper[31420]: I0220 12:19:26.587258 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hzmsb" Feb 20 12:19:27.202147 master-0 kubenswrapper[31420]: I0220 12:19:27.199897 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4twdx-config-pl7sr"] Feb 20 12:19:27.202147 master-0 kubenswrapper[31420]: E0220 12:19:27.200331 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b293179-ca05-4d55-8691-120c2b338814" containerName="mariadb-account-create-update" Feb 20 12:19:27.202147 master-0 kubenswrapper[31420]: I0220 12:19:27.200347 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b293179-ca05-4d55-8691-120c2b338814" containerName="mariadb-account-create-update" Feb 20 12:19:27.202147 master-0 kubenswrapper[31420]: I0220 12:19:27.200583 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b293179-ca05-4d55-8691-120c2b338814" containerName="mariadb-account-create-update" Feb 20 12:19:27.202147 master-0 kubenswrapper[31420]: I0220 12:19:27.201212 31420 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.207549 master-0 kubenswrapper[31420]: I0220 12:19:27.205926 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 20 12:19:27.224433 master-0 kubenswrapper[31420]: I0220 12:19:27.224163 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4twdx-config-pl7sr"] Feb 20 12:19:27.288042 master-0 kubenswrapper[31420]: I0220 12:19:27.287942 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-scripts\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.288042 master-0 kubenswrapper[31420]: I0220 12:19:27.288051 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph4mb\" (UniqueName: \"kubernetes.io/projected/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-kube-api-access-ph4mb\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.288386 master-0 kubenswrapper[31420]: I0220 12:19:27.288182 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-additional-scripts\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.288386 master-0 kubenswrapper[31420]: I0220 12:19:27.288236 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run-ovn\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.288386 master-0 kubenswrapper[31420]: I0220 12:19:27.288267 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.288386 master-0 kubenswrapper[31420]: I0220 12:19:27.288314 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-log-ovn\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.390256 master-0 kubenswrapper[31420]: I0220 12:19:27.390196 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-additional-scripts\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.390256 master-0 kubenswrapper[31420]: I0220 12:19:27.390268 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run-ovn\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.390549 master-0 kubenswrapper[31420]: I0220 12:19:27.390299 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.390549 master-0 kubenswrapper[31420]: I0220 12:19:27.390326 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-log-ovn\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.390549 master-0 kubenswrapper[31420]: I0220 12:19:27.390361 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-scripts\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.390549 master-0 kubenswrapper[31420]: I0220 12:19:27.390410 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph4mb\" (UniqueName: \"kubernetes.io/projected/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-kube-api-access-ph4mb\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.390746 master-0 kubenswrapper[31420]: I0220 12:19:27.390654 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-log-ovn\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.390800 master-0 kubenswrapper[31420]: I0220 12:19:27.390730 
31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.391160 master-0 kubenswrapper[31420]: I0220 12:19:27.391137 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run-ovn\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.391411 master-0 kubenswrapper[31420]: I0220 12:19:27.391383 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-additional-scripts\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.393753 master-0 kubenswrapper[31420]: I0220 12:19:27.393689 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-scripts\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.407381 master-0 kubenswrapper[31420]: I0220 12:19:27.407309 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph4mb\" (UniqueName: \"kubernetes.io/projected/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-kube-api-access-ph4mb\") pod \"ovn-controller-4twdx-config-pl7sr\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:27.551984 master-0 kubenswrapper[31420]: I0220 
12:19:27.551235 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:30.149566 master-0 kubenswrapper[31420]: I0220 12:19:30.149497 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4twdx-config-pl7sr"] Feb 20 12:19:30.158368 master-0 kubenswrapper[31420]: W0220 12:19:30.158314 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ddbb9f3_13a6_4a02_80f7_8b63047918e1.slice/crio-c31d9fd4090925ff3975a385234727e5127c3478fa9f712c4cf155624d7257e8 WatchSource:0}: Error finding container c31d9fd4090925ff3975a385234727e5127c3478fa9f712c4cf155624d7257e8: Status 404 returned error can't find the container with id c31d9fd4090925ff3975a385234727e5127c3478fa9f712c4cf155624d7257e8 Feb 20 12:19:30.444702 master-0 kubenswrapper[31420]: I0220 12:19:30.444646 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de0e242c-6018-42c0-8a59-b755e2bd36b0","Type":"ContainerStarted","Data":"57971997ae94b938225e16b0ee390dbc28ddb0dde00dfad84297c1bc0589ba0b"} Feb 20 12:19:30.444885 master-0 kubenswrapper[31420]: I0220 12:19:30.444847 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:19:30.450619 master-0 kubenswrapper[31420]: I0220 12:19:30.450571 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"ba8fa8fbc2b66adf41f6013da3b55f56eb592772966c0826369c8c8b29daf745"} Feb 20 12:19:30.450619 master-0 kubenswrapper[31420]: I0220 12:19:30.450607 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad","Type":"ContainerStarted","Data":"d07d90a8b9428b1f2a5b3371b46bc8994794f4a86a430ed814da1b583b17c5df"} Feb 20 12:19:30.457723 master-0 kubenswrapper[31420]: I0220 12:19:30.457617 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2d6nq" event={"ID":"964cb0d1-eb1a-404e-b395-0a733f4ae02b","Type":"ContainerStarted","Data":"61179ce47e86b7e72ff24a09f0108086936c3398509fb17f634c7953b8e76888"} Feb 20 12:19:30.461521 master-0 kubenswrapper[31420]: I0220 12:19:30.460300 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3027dc76-27b3-44c4-b217-885670c3e29e","Type":"ContainerStarted","Data":"be6a018d35554544dd1b630c2c5866e3d36a2bc9eb7409b4e4c677da7abdf62b"} Feb 20 12:19:30.461521 master-0 kubenswrapper[31420]: I0220 12:19:30.460634 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 20 12:19:30.468187 master-0 kubenswrapper[31420]: I0220 12:19:30.466050 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4twdx-config-pl7sr" event={"ID":"7ddbb9f3-13a6-4a02-80f7-8b63047918e1","Type":"ContainerStarted","Data":"c31d9fd4090925ff3975a385234727e5127c3478fa9f712c4cf155624d7257e8"} Feb 20 12:19:30.479154 master-0 kubenswrapper[31420]: I0220 12:19:30.479048 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=58.427687098 podStartE2EDuration="1m9.478996067s" podCreationTimestamp="2026-02-20 12:18:21 +0000 UTC" firstStartedPulling="2026-02-20 12:18:39.111329231 +0000 UTC m=+823.830567502" lastFinishedPulling="2026-02-20 12:18:50.16263823 +0000 UTC m=+834.881876471" observedRunningTime="2026-02-20 12:19:30.47841016 +0000 UTC m=+875.197648411" watchObservedRunningTime="2026-02-20 12:19:30.478996067 +0000 UTC m=+875.198234348" Feb 20 12:19:30.517605 master-0 kubenswrapper[31420]: I0220 
12:19:30.515043 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2d6nq" podStartSLOduration=2.798254054 podStartE2EDuration="14.515017135s" podCreationTimestamp="2026-02-20 12:19:16 +0000 UTC" firstStartedPulling="2026-02-20 12:19:18.021060094 +0000 UTC m=+862.740298335" lastFinishedPulling="2026-02-20 12:19:29.737823175 +0000 UTC m=+874.457061416" observedRunningTime="2026-02-20 12:19:30.497193546 +0000 UTC m=+875.216431787" watchObservedRunningTime="2026-02-20 12:19:30.515017135 +0000 UTC m=+875.234255376" Feb 20 12:19:30.543603 master-0 kubenswrapper[31420]: I0220 12:19:30.543432 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4twdx-config-pl7sr" podStartSLOduration=3.5434035 podStartE2EDuration="3.5434035s" podCreationTimestamp="2026-02-20 12:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:19:30.526475916 +0000 UTC m=+875.245714197" watchObservedRunningTime="2026-02-20 12:19:30.5434035 +0000 UTC m=+875.262641781" Feb 20 12:19:30.582666 master-0 kubenswrapper[31420]: I0220 12:19:30.582379 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=28.168916342 podStartE2EDuration="33.582361751s" podCreationTimestamp="2026-02-20 12:18:57 +0000 UTC" firstStartedPulling="2026-02-20 12:19:16.62585341 +0000 UTC m=+861.345091651" lastFinishedPulling="2026-02-20 12:19:22.039298809 +0000 UTC m=+866.758537060" observedRunningTime="2026-02-20 12:19:30.580702405 +0000 UTC m=+875.299940656" watchObservedRunningTime="2026-02-20 12:19:30.582361751 +0000 UTC m=+875.301599992" Feb 20 12:19:30.613580 master-0 kubenswrapper[31420]: I0220 12:19:30.613491 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=59.561184746 
podStartE2EDuration="1m10.613475482s" podCreationTimestamp="2026-02-20 12:18:20 +0000 UTC" firstStartedPulling="2026-02-20 12:18:39.111823125 +0000 UTC m=+823.831061366" lastFinishedPulling="2026-02-20 12:18:50.164113861 +0000 UTC m=+834.883352102" observedRunningTime="2026-02-20 12:19:30.610650123 +0000 UTC m=+875.329888364" watchObservedRunningTime="2026-02-20 12:19:30.613475482 +0000 UTC m=+875.332713723" Feb 20 12:19:30.862346 master-0 kubenswrapper[31420]: I0220 12:19:30.862264 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cc6d6f9-6ct5q"] Feb 20 12:19:30.865765 master-0 kubenswrapper[31420]: I0220 12:19:30.865724 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:30.868458 master-0 kubenswrapper[31420]: I0220 12:19:30.868406 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 20 12:19:30.877607 master-0 kubenswrapper[31420]: I0220 12:19:30.877555 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cc6d6f9-6ct5q"] Feb 20 12:19:30.965317 master-0 kubenswrapper[31420]: I0220 12:19:30.965026 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-config\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:30.965317 master-0 kubenswrapper[31420]: I0220 12:19:30.965094 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-swift-storage-0\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:30.965317 
master-0 kubenswrapper[31420]: I0220 12:19:30.965143 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvld\" (UniqueName: \"kubernetes.io/projected/380ac98f-d2d2-42cd-be81-c98387690df4-kube-api-access-nxvld\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:30.965593 master-0 kubenswrapper[31420]: I0220 12:19:30.965334 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-svc\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:30.965593 master-0 kubenswrapper[31420]: I0220 12:19:30.965505 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-sb\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:30.965690 master-0 kubenswrapper[31420]: I0220 12:19:30.965668 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-nb\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.068191 master-0 kubenswrapper[31420]: I0220 12:19:31.068101 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-config\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " 
pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.068401 master-0 kubenswrapper[31420]: I0220 12:19:31.068252 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-swift-storage-0\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.069223 master-0 kubenswrapper[31420]: I0220 12:19:31.069193 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-config\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.069693 master-0 kubenswrapper[31420]: I0220 12:19:31.069637 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-swift-storage-0\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.069778 master-0 kubenswrapper[31420]: I0220 12:19:31.069749 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxvld\" (UniqueName: \"kubernetes.io/projected/380ac98f-d2d2-42cd-be81-c98387690df4-kube-api-access-nxvld\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.069935 master-0 kubenswrapper[31420]: I0220 12:19:31.069913 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-svc\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " 
pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.070097 master-0 kubenswrapper[31420]: I0220 12:19:31.070061 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-sb\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.070662 master-0 kubenswrapper[31420]: I0220 12:19:31.070640 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-svc\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.071332 master-0 kubenswrapper[31420]: I0220 12:19:31.071293 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-sb\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.071545 master-0 kubenswrapper[31420]: I0220 12:19:31.071507 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-nb\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.072976 master-0 kubenswrapper[31420]: I0220 12:19:31.072941 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-nb\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " 
pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.087045 master-0 kubenswrapper[31420]: I0220 12:19:31.086996 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxvld\" (UniqueName: \"kubernetes.io/projected/380ac98f-d2d2-42cd-be81-c98387690df4-kube-api-access-nxvld\") pod \"dnsmasq-dns-78cc6d6f9-6ct5q\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.188254 master-0 kubenswrapper[31420]: I0220 12:19:31.188170 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:31.499356 master-0 kubenswrapper[31420]: I0220 12:19:31.499240 31420 generic.go:334] "Generic (PLEG): container finished" podID="7ddbb9f3-13a6-4a02-80f7-8b63047918e1" containerID="c55b17c12154340e7cab8f1ad70acc2287f46f82d1d31be8760ad228aee52d03" exitCode=0 Feb 20 12:19:31.528012 master-0 kubenswrapper[31420]: I0220 12:19:31.527899 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4twdx-config-pl7sr" event={"ID":"7ddbb9f3-13a6-4a02-80f7-8b63047918e1","Type":"ContainerDied","Data":"c55b17c12154340e7cab8f1ad70acc2287f46f82d1d31be8760ad228aee52d03"} Feb 20 12:19:31.578197 master-0 kubenswrapper[31420]: I0220 12:19:31.578129 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4twdx" Feb 20 12:19:31.676414 master-0 kubenswrapper[31420]: I0220 12:19:31.676344 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cc6d6f9-6ct5q"] Feb 20 12:19:31.684852 master-0 kubenswrapper[31420]: W0220 12:19:31.684774 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod380ac98f_d2d2_42cd_be81_c98387690df4.slice/crio-9de88a60084813faeee373770633ee70c5e8d58c4852b4b5d7bd5fa11bb864b7 WatchSource:0}: Error finding container 
9de88a60084813faeee373770633ee70c5e8d58c4852b4b5d7bd5fa11bb864b7: Status 404 returned error can't find the container with id 9de88a60084813faeee373770633ee70c5e8d58c4852b4b5d7bd5fa11bb864b7 Feb 20 12:19:32.513065 master-0 kubenswrapper[31420]: I0220 12:19:32.512982 31420 generic.go:334] "Generic (PLEG): container finished" podID="380ac98f-d2d2-42cd-be81-c98387690df4" containerID="13a2453f36e91d02e72cfa93ecfe6c136b9a4cb74eba8695c44691710f1ed0f6" exitCode=0 Feb 20 12:19:32.513065 master-0 kubenswrapper[31420]: I0220 12:19:32.513033 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" event={"ID":"380ac98f-d2d2-42cd-be81-c98387690df4","Type":"ContainerDied","Data":"13a2453f36e91d02e72cfa93ecfe6c136b9a4cb74eba8695c44691710f1ed0f6"} Feb 20 12:19:32.513959 master-0 kubenswrapper[31420]: I0220 12:19:32.513106 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" event={"ID":"380ac98f-d2d2-42cd-be81-c98387690df4","Type":"ContainerStarted","Data":"9de88a60084813faeee373770633ee70c5e8d58c4852b4b5d7bd5fa11bb864b7"} Feb 20 12:19:32.943411 master-0 kubenswrapper[31420]: I0220 12:19:32.943344 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:33.015086 master-0 kubenswrapper[31420]: I0220 12:19:33.015030 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run-ovn\") pod \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " Feb 20 12:19:33.015374 master-0 kubenswrapper[31420]: I0220 12:19:33.015359 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-scripts\") pod \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " Feb 20 12:19:33.015489 master-0 kubenswrapper[31420]: I0220 12:19:33.015476 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-log-ovn\") pod \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " Feb 20 12:19:33.015614 master-0 kubenswrapper[31420]: I0220 12:19:33.015600 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run\") pod \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " Feb 20 12:19:33.015714 master-0 kubenswrapper[31420]: I0220 12:19:33.015699 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-additional-scripts\") pod \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " Feb 20 12:19:33.015908 master-0 kubenswrapper[31420]: I0220 12:19:33.015188 31420 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7ddbb9f3-13a6-4a02-80f7-8b63047918e1" (UID: "7ddbb9f3-13a6-4a02-80f7-8b63047918e1"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:19:33.015954 master-0 kubenswrapper[31420]: I0220 12:19:33.015889 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph4mb\" (UniqueName: \"kubernetes.io/projected/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-kube-api-access-ph4mb\") pod \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\" (UID: \"7ddbb9f3-13a6-4a02-80f7-8b63047918e1\") " Feb 20 12:19:33.016056 master-0 kubenswrapper[31420]: I0220 12:19:33.015731 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7ddbb9f3-13a6-4a02-80f7-8b63047918e1" (UID: "7ddbb9f3-13a6-4a02-80f7-8b63047918e1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:19:33.016099 master-0 kubenswrapper[31420]: I0220 12:19:33.015792 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run" (OuterVolumeSpecName: "var-run") pod "7ddbb9f3-13a6-4a02-80f7-8b63047918e1" (UID: "7ddbb9f3-13a6-4a02-80f7-8b63047918e1"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:19:33.016460 master-0 kubenswrapper[31420]: I0220 12:19:33.016397 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7ddbb9f3-13a6-4a02-80f7-8b63047918e1" (UID: "7ddbb9f3-13a6-4a02-80f7-8b63047918e1"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:33.017310 master-0 kubenswrapper[31420]: I0220 12:19:33.017272 31420 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:33.017375 master-0 kubenswrapper[31420]: I0220 12:19:33.017312 31420 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:33.017375 master-0 kubenswrapper[31420]: I0220 12:19:33.017334 31420 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-var-run\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:33.017375 master-0 kubenswrapper[31420]: I0220 12:19:33.017354 31420 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:33.019592 master-0 kubenswrapper[31420]: I0220 12:19:33.019512 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-scripts" (OuterVolumeSpecName: "scripts") pod "7ddbb9f3-13a6-4a02-80f7-8b63047918e1" (UID: "7ddbb9f3-13a6-4a02-80f7-8b63047918e1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:33.019895 master-0 kubenswrapper[31420]: I0220 12:19:33.019870 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-kube-api-access-ph4mb" (OuterVolumeSpecName: "kube-api-access-ph4mb") pod "7ddbb9f3-13a6-4a02-80f7-8b63047918e1" (UID: "7ddbb9f3-13a6-4a02-80f7-8b63047918e1"). InnerVolumeSpecName "kube-api-access-ph4mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:33.119657 master-0 kubenswrapper[31420]: I0220 12:19:33.119330 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph4mb\" (UniqueName: \"kubernetes.io/projected/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-kube-api-access-ph4mb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:33.119657 master-0 kubenswrapper[31420]: I0220 12:19:33.119385 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ddbb9f3-13a6-4a02-80f7-8b63047918e1-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:33.266505 master-0 kubenswrapper[31420]: I0220 12:19:33.266393 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4twdx-config-pl7sr"] Feb 20 12:19:33.279455 master-0 kubenswrapper[31420]: I0220 12:19:33.279388 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4twdx-config-pl7sr"] Feb 20 12:19:33.539199 master-0 kubenswrapper[31420]: I0220 12:19:33.539138 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ddbb9f3-13a6-4a02-80f7-8b63047918e1" path="/var/lib/kubelet/pods/7ddbb9f3-13a6-4a02-80f7-8b63047918e1/volumes" Feb 20 12:19:33.543929 master-0 kubenswrapper[31420]: I0220 12:19:33.543869 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4twdx-config-pl7sr" Feb 20 12:19:33.544139 master-0 kubenswrapper[31420]: I0220 12:19:33.544102 31420 scope.go:117] "RemoveContainer" containerID="c55b17c12154340e7cab8f1ad70acc2287f46f82d1d31be8760ad228aee52d03" Feb 20 12:19:33.550809 master-0 kubenswrapper[31420]: I0220 12:19:33.550758 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" event={"ID":"380ac98f-d2d2-42cd-be81-c98387690df4","Type":"ContainerStarted","Data":"18df7d168eb282c5bbc91f49fac72678a4c662bbafe86bf67ba91551ed043aa5"} Feb 20 12:19:33.552645 master-0 kubenswrapper[31420]: I0220 12:19:33.552598 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:33.581187 master-0 kubenswrapper[31420]: I0220 12:19:33.581080 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" podStartSLOduration=3.58105315 podStartE2EDuration="3.58105315s" podCreationTimestamp="2026-02-20 12:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:19:33.574240779 +0000 UTC m=+878.293479060" watchObservedRunningTime="2026-02-20 12:19:33.58105315 +0000 UTC m=+878.300291401" Feb 20 12:19:38.613151 master-0 kubenswrapper[31420]: I0220 12:19:38.613066 31420 generic.go:334] "Generic (PLEG): container finished" podID="964cb0d1-eb1a-404e-b395-0a733f4ae02b" containerID="61179ce47e86b7e72ff24a09f0108086936c3398509fb17f634c7953b8e76888" exitCode=0 Feb 20 12:19:38.613151 master-0 kubenswrapper[31420]: I0220 12:19:38.613122 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2d6nq" event={"ID":"964cb0d1-eb1a-404e-b395-0a733f4ae02b","Type":"ContainerDied","Data":"61179ce47e86b7e72ff24a09f0108086936c3398509fb17f634c7953b8e76888"} Feb 20 12:19:40.219191 master-0 
kubenswrapper[31420]: I0220 12:19:40.219126 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:40.379221 master-0 kubenswrapper[31420]: I0220 12:19:40.379084 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-db-sync-config-data\") pod \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " Feb 20 12:19:40.379412 master-0 kubenswrapper[31420]: I0220 12:19:40.379284 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tsbh\" (UniqueName: \"kubernetes.io/projected/964cb0d1-eb1a-404e-b395-0a733f4ae02b-kube-api-access-2tsbh\") pod \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " Feb 20 12:19:40.379412 master-0 kubenswrapper[31420]: I0220 12:19:40.379331 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-config-data\") pod \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " Feb 20 12:19:40.379412 master-0 kubenswrapper[31420]: I0220 12:19:40.379396 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-combined-ca-bundle\") pod \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\" (UID: \"964cb0d1-eb1a-404e-b395-0a733f4ae02b\") " Feb 20 12:19:40.382708 master-0 kubenswrapper[31420]: I0220 12:19:40.382596 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964cb0d1-eb1a-404e-b395-0a733f4ae02b-kube-api-access-2tsbh" (OuterVolumeSpecName: "kube-api-access-2tsbh") pod "964cb0d1-eb1a-404e-b395-0a733f4ae02b" (UID: 
"964cb0d1-eb1a-404e-b395-0a733f4ae02b"). InnerVolumeSpecName "kube-api-access-2tsbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:40.382708 master-0 kubenswrapper[31420]: I0220 12:19:40.382653 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "964cb0d1-eb1a-404e-b395-0a733f4ae02b" (UID: "964cb0d1-eb1a-404e-b395-0a733f4ae02b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:19:40.432265 master-0 kubenswrapper[31420]: I0220 12:19:40.432186 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "964cb0d1-eb1a-404e-b395-0a733f4ae02b" (UID: "964cb0d1-eb1a-404e-b395-0a733f4ae02b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:19:40.451961 master-0 kubenswrapper[31420]: I0220 12:19:40.451896 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-config-data" (OuterVolumeSpecName: "config-data") pod "964cb0d1-eb1a-404e-b395-0a733f4ae02b" (UID: "964cb0d1-eb1a-404e-b395-0a733f4ae02b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:19:40.481495 master-0 kubenswrapper[31420]: I0220 12:19:40.481434 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:40.481495 master-0 kubenswrapper[31420]: I0220 12:19:40.481479 31420 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:40.481495 master-0 kubenswrapper[31420]: I0220 12:19:40.481497 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tsbh\" (UniqueName: \"kubernetes.io/projected/964cb0d1-eb1a-404e-b395-0a733f4ae02b-kube-api-access-2tsbh\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:40.481779 master-0 kubenswrapper[31420]: I0220 12:19:40.481514 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/964cb0d1-eb1a-404e-b395-0a733f4ae02b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:40.647077 master-0 kubenswrapper[31420]: I0220 12:19:40.646510 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2d6nq" event={"ID":"964cb0d1-eb1a-404e-b395-0a733f4ae02b","Type":"ContainerDied","Data":"979e0a5a357b1d3f1b4604fc77d1bf3863ced13adad1606935aa20ff836c59a0"} Feb 20 12:19:40.647077 master-0 kubenswrapper[31420]: I0220 12:19:40.646648 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2d6nq" Feb 20 12:19:40.647077 master-0 kubenswrapper[31420]: I0220 12:19:40.646714 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="979e0a5a357b1d3f1b4604fc77d1bf3863ced13adad1606935aa20ff836c59a0" Feb 20 12:19:41.146660 master-0 kubenswrapper[31420]: I0220 12:19:41.130187 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cc6d6f9-6ct5q"] Feb 20 12:19:41.146660 master-0 kubenswrapper[31420]: I0220 12:19:41.130496 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" podUID="380ac98f-d2d2-42cd-be81-c98387690df4" containerName="dnsmasq-dns" containerID="cri-o://18df7d168eb282c5bbc91f49fac72678a4c662bbafe86bf67ba91551ed043aa5" gracePeriod=10 Feb 20 12:19:41.146660 master-0 kubenswrapper[31420]: I0220 12:19:41.134312 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:41.186039 master-0 kubenswrapper[31420]: I0220 12:19:41.181977 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr"] Feb 20 12:19:41.186039 master-0 kubenswrapper[31420]: E0220 12:19:41.182476 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964cb0d1-eb1a-404e-b395-0a733f4ae02b" containerName="glance-db-sync" Feb 20 12:19:41.186039 master-0 kubenswrapper[31420]: I0220 12:19:41.182492 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="964cb0d1-eb1a-404e-b395-0a733f4ae02b" containerName="glance-db-sync" Feb 20 12:19:41.186039 master-0 kubenswrapper[31420]: E0220 12:19:41.182573 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddbb9f3-13a6-4a02-80f7-8b63047918e1" containerName="ovn-config" Feb 20 12:19:41.186039 master-0 kubenswrapper[31420]: I0220 12:19:41.182585 31420 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7ddbb9f3-13a6-4a02-80f7-8b63047918e1" containerName="ovn-config" Feb 20 12:19:41.187126 master-0 kubenswrapper[31420]: I0220 12:19:41.186659 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ddbb9f3-13a6-4a02-80f7-8b63047918e1" containerName="ovn-config" Feb 20 12:19:41.187126 master-0 kubenswrapper[31420]: I0220 12:19:41.186709 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="964cb0d1-eb1a-404e-b395-0a733f4ae02b" containerName="glance-db-sync" Feb 20 12:19:41.188848 master-0 kubenswrapper[31420]: I0220 12:19:41.188272 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.189505 master-0 kubenswrapper[31420]: I0220 12:19:41.189463 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" podUID="380ac98f-d2d2-42cd-be81-c98387690df4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.198:5353: connect: connection refused" Feb 20 12:19:41.205984 master-0 kubenswrapper[31420]: I0220 12:19:41.203211 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-svc\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.205984 master-0 kubenswrapper[31420]: I0220 12:19:41.203301 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hpgp\" (UniqueName: \"kubernetes.io/projected/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-kube-api-access-2hpgp\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.205984 master-0 kubenswrapper[31420]: I0220 12:19:41.203366 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.205984 master-0 kubenswrapper[31420]: I0220 12:19:41.203442 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.205984 master-0 kubenswrapper[31420]: I0220 12:19:41.203478 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.205984 master-0 kubenswrapper[31420]: I0220 12:19:41.204609 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-config\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.232068 master-0 kubenswrapper[31420]: I0220 12:19:41.231610 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr"] Feb 20 12:19:41.306779 master-0 kubenswrapper[31420]: I0220 12:19:41.306717 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-svc\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.306779 master-0 kubenswrapper[31420]: I0220 12:19:41.306774 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hpgp\" (UniqueName: \"kubernetes.io/projected/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-kube-api-access-2hpgp\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.306978 master-0 kubenswrapper[31420]: I0220 12:19:41.306821 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.306978 master-0 kubenswrapper[31420]: I0220 12:19:41.306891 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.306978 master-0 kubenswrapper[31420]: I0220 12:19:41.306916 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.306978 master-0 kubenswrapper[31420]: I0220 12:19:41.306951 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-config\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.309644 master-0 kubenswrapper[31420]: I0220 12:19:41.309402 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-config\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.310019 master-0 kubenswrapper[31420]: I0220 12:19:41.309969 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-svc\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.311193 master-0 kubenswrapper[31420]: I0220 12:19:41.311159 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.311734 master-0 kubenswrapper[31420]: I0220 12:19:41.311705 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.312326 master-0 kubenswrapper[31420]: I0220 12:19:41.312260 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.330393 master-0 kubenswrapper[31420]: I0220 12:19:41.330318 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hpgp\" (UniqueName: \"kubernetes.io/projected/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-kube-api-access-2hpgp\") pod \"dnsmasq-dns-6b9bfb6bf7-8rdzr\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") " pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.559155 master-0 kubenswrapper[31420]: I0220 12:19:41.559095 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:41.660691 master-0 kubenswrapper[31420]: I0220 12:19:41.660044 31420 generic.go:334] "Generic (PLEG): container finished" podID="380ac98f-d2d2-42cd-be81-c98387690df4" containerID="18df7d168eb282c5bbc91f49fac72678a4c662bbafe86bf67ba91551ed043aa5" exitCode=0 Feb 20 12:19:41.660691 master-0 kubenswrapper[31420]: I0220 12:19:41.660124 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" event={"ID":"380ac98f-d2d2-42cd-be81-c98387690df4","Type":"ContainerDied","Data":"18df7d168eb282c5bbc91f49fac72678a4c662bbafe86bf67ba91551ed043aa5"} Feb 20 12:19:41.660691 master-0 kubenswrapper[31420]: I0220 12:19:41.660231 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" event={"ID":"380ac98f-d2d2-42cd-be81-c98387690df4","Type":"ContainerDied","Data":"9de88a60084813faeee373770633ee70c5e8d58c4852b4b5d7bd5fa11bb864b7"} Feb 20 12:19:41.660691 master-0 kubenswrapper[31420]: I0220 12:19:41.660244 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9de88a60084813faeee373770633ee70c5e8d58c4852b4b5d7bd5fa11bb864b7" Feb 20 12:19:41.691789 
master-0 kubenswrapper[31420]: I0220 12:19:41.691741 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:41.832509 master-0 kubenswrapper[31420]: I0220 12:19:41.832045 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-swift-storage-0\") pod \"380ac98f-d2d2-42cd-be81-c98387690df4\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " Feb 20 12:19:41.832509 master-0 kubenswrapper[31420]: I0220 12:19:41.832164 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxvld\" (UniqueName: \"kubernetes.io/projected/380ac98f-d2d2-42cd-be81-c98387690df4-kube-api-access-nxvld\") pod \"380ac98f-d2d2-42cd-be81-c98387690df4\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " Feb 20 12:19:41.832509 master-0 kubenswrapper[31420]: I0220 12:19:41.832254 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-config\") pod \"380ac98f-d2d2-42cd-be81-c98387690df4\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " Feb 20 12:19:41.832509 master-0 kubenswrapper[31420]: I0220 12:19:41.832349 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-sb\") pod \"380ac98f-d2d2-42cd-be81-c98387690df4\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " Feb 20 12:19:41.833866 master-0 kubenswrapper[31420]: I0220 12:19:41.832881 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-svc\") pod \"380ac98f-d2d2-42cd-be81-c98387690df4\" (UID: 
\"380ac98f-d2d2-42cd-be81-c98387690df4\") " Feb 20 12:19:41.833866 master-0 kubenswrapper[31420]: I0220 12:19:41.832928 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-nb\") pod \"380ac98f-d2d2-42cd-be81-c98387690df4\" (UID: \"380ac98f-d2d2-42cd-be81-c98387690df4\") " Feb 20 12:19:41.836833 master-0 kubenswrapper[31420]: I0220 12:19:41.836468 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380ac98f-d2d2-42cd-be81-c98387690df4-kube-api-access-nxvld" (OuterVolumeSpecName: "kube-api-access-nxvld") pod "380ac98f-d2d2-42cd-be81-c98387690df4" (UID: "380ac98f-d2d2-42cd-be81-c98387690df4"). InnerVolumeSpecName "kube-api-access-nxvld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:41.892714 master-0 kubenswrapper[31420]: I0220 12:19:41.892623 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "380ac98f-d2d2-42cd-be81-c98387690df4" (UID: "380ac98f-d2d2-42cd-be81-c98387690df4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:41.892923 master-0 kubenswrapper[31420]: I0220 12:19:41.892702 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-config" (OuterVolumeSpecName: "config") pod "380ac98f-d2d2-42cd-be81-c98387690df4" (UID: "380ac98f-d2d2-42cd-be81-c98387690df4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:41.896366 master-0 kubenswrapper[31420]: I0220 12:19:41.896307 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "380ac98f-d2d2-42cd-be81-c98387690df4" (UID: "380ac98f-d2d2-42cd-be81-c98387690df4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:41.904676 master-0 kubenswrapper[31420]: I0220 12:19:41.904614 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "380ac98f-d2d2-42cd-be81-c98387690df4" (UID: "380ac98f-d2d2-42cd-be81-c98387690df4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:41.907328 master-0 kubenswrapper[31420]: I0220 12:19:41.906478 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "380ac98f-d2d2-42cd-be81-c98387690df4" (UID: "380ac98f-d2d2-42cd-be81-c98387690df4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:41.938920 master-0 kubenswrapper[31420]: I0220 12:19:41.937990 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxvld\" (UniqueName: \"kubernetes.io/projected/380ac98f-d2d2-42cd-be81-c98387690df4-kube-api-access-nxvld\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:41.938920 master-0 kubenswrapper[31420]: I0220 12:19:41.938061 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:41.938920 master-0 kubenswrapper[31420]: I0220 12:19:41.938077 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:41.938920 master-0 kubenswrapper[31420]: I0220 12:19:41.938088 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:41.938920 master-0 kubenswrapper[31420]: I0220 12:19:41.938101 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:41.938920 master-0 kubenswrapper[31420]: I0220 12:19:41.938112 31420 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/380ac98f-d2d2-42cd-be81-c98387690df4-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:41.994352 master-0 kubenswrapper[31420]: I0220 12:19:41.993775 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr"] Feb 20 12:19:42.680395 master-0 
kubenswrapper[31420]: I0220 12:19:42.680327 31420 generic.go:334] "Generic (PLEG): container finished" podID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerID="d8ac5b816f52692216025c704a036f55a8b6853585d811910bfc71679665dec5" exitCode=0 Feb 20 12:19:42.681189 master-0 kubenswrapper[31420]: I0220 12:19:42.680398 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" event={"ID":"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4","Type":"ContainerDied","Data":"d8ac5b816f52692216025c704a036f55a8b6853585d811910bfc71679665dec5"} Feb 20 12:19:42.681189 master-0 kubenswrapper[31420]: I0220 12:19:42.680446 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" event={"ID":"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4","Type":"ContainerStarted","Data":"d28884cfcfb83f057be8f956d79455ae8b698fc3d5dd33f946f75b9ec1db23ab"} Feb 20 12:19:42.681189 master-0 kubenswrapper[31420]: I0220 12:19:42.680494 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cc6d6f9-6ct5q" Feb 20 12:19:42.895831 master-0 kubenswrapper[31420]: I0220 12:19:42.895765 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cc6d6f9-6ct5q"] Feb 20 12:19:42.904638 master-0 kubenswrapper[31420]: I0220 12:19:42.904556 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cc6d6f9-6ct5q"] Feb 20 12:19:43.515969 master-0 kubenswrapper[31420]: I0220 12:19:43.515900 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="380ac98f-d2d2-42cd-be81-c98387690df4" path="/var/lib/kubelet/pods/380ac98f-d2d2-42cd-be81-c98387690df4/volumes" Feb 20 12:19:43.695637 master-0 kubenswrapper[31420]: I0220 12:19:43.695563 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" event={"ID":"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4","Type":"ContainerStarted","Data":"da731389c95edc5cb32c72f71c53568fec6984d8889fcaf7c10e28098e1b42c5"} Feb 20 12:19:43.696106 master-0 kubenswrapper[31420]: I0220 12:19:43.695734 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:43.721196 master-0 kubenswrapper[31420]: I0220 12:19:43.721111 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" podStartSLOduration=2.721096216 podStartE2EDuration="2.721096216s" podCreationTimestamp="2026-02-20 12:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:19:43.716560439 +0000 UTC m=+888.435798690" watchObservedRunningTime="2026-02-20 12:19:43.721096216 +0000 UTC m=+888.440334457" Feb 20 12:19:46.164797 master-0 kubenswrapper[31420]: I0220 12:19:46.164719 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 20 12:19:46.574560 
master-0 kubenswrapper[31420]: I0220 12:19:46.570341 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2jfwr"] Feb 20 12:19:46.574560 master-0 kubenswrapper[31420]: E0220 12:19:46.570831 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380ac98f-d2d2-42cd-be81-c98387690df4" containerName="init" Feb 20 12:19:46.574560 master-0 kubenswrapper[31420]: I0220 12:19:46.570868 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="380ac98f-d2d2-42cd-be81-c98387690df4" containerName="init" Feb 20 12:19:46.574560 master-0 kubenswrapper[31420]: E0220 12:19:46.570910 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380ac98f-d2d2-42cd-be81-c98387690df4" containerName="dnsmasq-dns" Feb 20 12:19:46.574560 master-0 kubenswrapper[31420]: I0220 12:19:46.570917 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="380ac98f-d2d2-42cd-be81-c98387690df4" containerName="dnsmasq-dns" Feb 20 12:19:46.574560 master-0 kubenswrapper[31420]: I0220 12:19:46.571153 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="380ac98f-d2d2-42cd-be81-c98387690df4" containerName="dnsmasq-dns" Feb 20 12:19:46.580846 master-0 kubenswrapper[31420]: I0220 12:19:46.577973 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2jfwr" Feb 20 12:19:46.607117 master-0 kubenswrapper[31420]: I0220 12:19:46.606731 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2jfwr"] Feb 20 12:19:46.704356 master-0 kubenswrapper[31420]: I0220 12:19:46.704234 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5976-account-create-update-4hl72"] Feb 20 12:19:46.706277 master-0 kubenswrapper[31420]: I0220 12:19:46.706251 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5976-account-create-update-4hl72" Feb 20 12:19:46.708637 master-0 kubenswrapper[31420]: I0220 12:19:46.708617 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 20 12:19:46.727341 master-0 kubenswrapper[31420]: I0220 12:19:46.727249 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5976-account-create-update-4hl72"] Feb 20 12:19:46.762131 master-0 kubenswrapper[31420]: I0220 12:19:46.760564 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff72fc46-bb67-400f-b402-cc01ed97f277-operator-scripts\") pod \"cinder-db-create-2jfwr\" (UID: \"ff72fc46-bb67-400f-b402-cc01ed97f277\") " pod="openstack/cinder-db-create-2jfwr" Feb 20 12:19:46.762131 master-0 kubenswrapper[31420]: I0220 12:19:46.760618 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8ssj\" (UniqueName: \"kubernetes.io/projected/ff72fc46-bb67-400f-b402-cc01ed97f277-kube-api-access-m8ssj\") pod \"cinder-db-create-2jfwr\" (UID: \"ff72fc46-bb67-400f-b402-cc01ed97f277\") " pod="openstack/cinder-db-create-2jfwr" Feb 20 12:19:46.791324 master-0 kubenswrapper[31420]: I0220 12:19:46.791266 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8ds52"] Feb 20 12:19:46.795577 master-0 kubenswrapper[31420]: I0220 12:19:46.793180 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8ds52" Feb 20 12:19:46.808986 master-0 kubenswrapper[31420]: I0220 12:19:46.808137 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8ds52"] Feb 20 12:19:46.862839 master-0 kubenswrapper[31420]: I0220 12:19:46.862780 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9bc6\" (UniqueName: \"kubernetes.io/projected/f94c78c2-eb36-445b-a302-a5f67efdd1e8-kube-api-access-k9bc6\") pod \"cinder-5976-account-create-update-4hl72\" (UID: \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\") " pod="openstack/cinder-5976-account-create-update-4hl72" Feb 20 12:19:46.863099 master-0 kubenswrapper[31420]: I0220 12:19:46.862918 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f94c78c2-eb36-445b-a302-a5f67efdd1e8-operator-scripts\") pod \"cinder-5976-account-create-update-4hl72\" (UID: \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\") " pod="openstack/cinder-5976-account-create-update-4hl72" Feb 20 12:19:46.863099 master-0 kubenswrapper[31420]: I0220 12:19:46.862952 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff72fc46-bb67-400f-b402-cc01ed97f277-operator-scripts\") pod \"cinder-db-create-2jfwr\" (UID: \"ff72fc46-bb67-400f-b402-cc01ed97f277\") " pod="openstack/cinder-db-create-2jfwr" Feb 20 12:19:46.863099 master-0 kubenswrapper[31420]: I0220 12:19:46.862972 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8ssj\" (UniqueName: \"kubernetes.io/projected/ff72fc46-bb67-400f-b402-cc01ed97f277-kube-api-access-m8ssj\") pod \"cinder-db-create-2jfwr\" (UID: \"ff72fc46-bb67-400f-b402-cc01ed97f277\") " pod="openstack/cinder-db-create-2jfwr" Feb 20 12:19:46.864301 master-0 kubenswrapper[31420]: 
I0220 12:19:46.863950 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff72fc46-bb67-400f-b402-cc01ed97f277-operator-scripts\") pod \"cinder-db-create-2jfwr\" (UID: \"ff72fc46-bb67-400f-b402-cc01ed97f277\") " pod="openstack/cinder-db-create-2jfwr" Feb 20 12:19:46.885710 master-0 kubenswrapper[31420]: I0220 12:19:46.885670 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8ssj\" (UniqueName: \"kubernetes.io/projected/ff72fc46-bb67-400f-b402-cc01ed97f277-kube-api-access-m8ssj\") pod \"cinder-db-create-2jfwr\" (UID: \"ff72fc46-bb67-400f-b402-cc01ed97f277\") " pod="openstack/cinder-db-create-2jfwr" Feb 20 12:19:46.902466 master-0 kubenswrapper[31420]: I0220 12:19:46.902400 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c59-account-create-update-5sh5r"] Feb 20 12:19:46.903905 master-0 kubenswrapper[31420]: I0220 12:19:46.903868 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c59-account-create-update-5sh5r" Feb 20 12:19:46.906042 master-0 kubenswrapper[31420]: I0220 12:19:46.905990 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 20 12:19:46.938552 master-0 kubenswrapper[31420]: I0220 12:19:46.937595 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c59-account-create-update-5sh5r"] Feb 20 12:19:46.958732 master-0 kubenswrapper[31420]: I0220 12:19:46.956488 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2jfwr" Feb 20 12:19:46.980553 master-0 kubenswrapper[31420]: I0220 12:19:46.968976 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9bc6\" (UniqueName: \"kubernetes.io/projected/f94c78c2-eb36-445b-a302-a5f67efdd1e8-kube-api-access-k9bc6\") pod \"cinder-5976-account-create-update-4hl72\" (UID: \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\") " pod="openstack/cinder-5976-account-create-update-4hl72" Feb 20 12:19:46.980553 master-0 kubenswrapper[31420]: I0220 12:19:46.969090 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pccf5\" (UniqueName: \"kubernetes.io/projected/4dc5e57b-2b58-449c-95e3-844ee6b42236-kube-api-access-pccf5\") pod \"neutron-db-create-8ds52\" (UID: \"4dc5e57b-2b58-449c-95e3-844ee6b42236\") " pod="openstack/neutron-db-create-8ds52" Feb 20 12:19:46.980553 master-0 kubenswrapper[31420]: I0220 12:19:46.969208 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc5e57b-2b58-449c-95e3-844ee6b42236-operator-scripts\") pod \"neutron-db-create-8ds52\" (UID: \"4dc5e57b-2b58-449c-95e3-844ee6b42236\") " pod="openstack/neutron-db-create-8ds52" Feb 20 12:19:46.980553 master-0 kubenswrapper[31420]: I0220 12:19:46.969280 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f94c78c2-eb36-445b-a302-a5f67efdd1e8-operator-scripts\") pod \"cinder-5976-account-create-update-4hl72\" (UID: \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\") " pod="openstack/cinder-5976-account-create-update-4hl72" Feb 20 12:19:46.980553 master-0 kubenswrapper[31420]: I0220 12:19:46.970875 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f94c78c2-eb36-445b-a302-a5f67efdd1e8-operator-scripts\") pod \"cinder-5976-account-create-update-4hl72\" (UID: \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\") " pod="openstack/cinder-5976-account-create-update-4hl72" Feb 20 12:19:47.012560 master-0 kubenswrapper[31420]: I0220 12:19:47.010820 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9bc6\" (UniqueName: \"kubernetes.io/projected/f94c78c2-eb36-445b-a302-a5f67efdd1e8-kube-api-access-k9bc6\") pod \"cinder-5976-account-create-update-4hl72\" (UID: \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\") " pod="openstack/cinder-5976-account-create-update-4hl72" Feb 20 12:19:47.068559 master-0 kubenswrapper[31420]: I0220 12:19:47.064416 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bg44c"] Feb 20 12:19:47.068559 master-0 kubenswrapper[31420]: I0220 12:19:47.065709 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.072608 master-0 kubenswrapper[31420]: I0220 12:19:47.070596 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkx9r\" (UniqueName: \"kubernetes.io/projected/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-kube-api-access-xkx9r\") pod \"neutron-7c59-account-create-update-5sh5r\" (UID: \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\") " pod="openstack/neutron-7c59-account-create-update-5sh5r" Feb 20 12:19:47.072608 master-0 kubenswrapper[31420]: I0220 12:19:47.070714 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pccf5\" (UniqueName: \"kubernetes.io/projected/4dc5e57b-2b58-449c-95e3-844ee6b42236-kube-api-access-pccf5\") pod \"neutron-db-create-8ds52\" (UID: \"4dc5e57b-2b58-449c-95e3-844ee6b42236\") " pod="openstack/neutron-db-create-8ds52" Feb 20 12:19:47.072608 master-0 kubenswrapper[31420]: I0220 12:19:47.070742 31420 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-operator-scripts\") pod \"neutron-7c59-account-create-update-5sh5r\" (UID: \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\") " pod="openstack/neutron-7c59-account-create-update-5sh5r" Feb 20 12:19:47.072608 master-0 kubenswrapper[31420]: I0220 12:19:47.070803 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc5e57b-2b58-449c-95e3-844ee6b42236-operator-scripts\") pod \"neutron-db-create-8ds52\" (UID: \"4dc5e57b-2b58-449c-95e3-844ee6b42236\") " pod="openstack/neutron-db-create-8ds52" Feb 20 12:19:47.072608 master-0 kubenswrapper[31420]: I0220 12:19:47.071605 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc5e57b-2b58-449c-95e3-844ee6b42236-operator-scripts\") pod \"neutron-db-create-8ds52\" (UID: \"4dc5e57b-2b58-449c-95e3-844ee6b42236\") " pod="openstack/neutron-db-create-8ds52" Feb 20 12:19:47.074140 master-0 kubenswrapper[31420]: I0220 12:19:47.073464 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 20 12:19:47.074140 master-0 kubenswrapper[31420]: I0220 12:19:47.073680 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 20 12:19:47.074140 master-0 kubenswrapper[31420]: I0220 12:19:47.073745 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 20 12:19:47.094556 master-0 kubenswrapper[31420]: I0220 12:19:47.092793 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5976-account-create-update-4hl72" Feb 20 12:19:47.094556 master-0 kubenswrapper[31420]: I0220 12:19:47.094469 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bg44c"] Feb 20 12:19:47.119800 master-0 kubenswrapper[31420]: I0220 12:19:47.111237 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pccf5\" (UniqueName: \"kubernetes.io/projected/4dc5e57b-2b58-449c-95e3-844ee6b42236-kube-api-access-pccf5\") pod \"neutron-db-create-8ds52\" (UID: \"4dc5e57b-2b58-449c-95e3-844ee6b42236\") " pod="openstack/neutron-db-create-8ds52" Feb 20 12:19:47.130622 master-0 kubenswrapper[31420]: I0220 12:19:47.124950 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8ds52" Feb 20 12:19:47.174829 master-0 kubenswrapper[31420]: I0220 12:19:47.172022 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-operator-scripts\") pod \"neutron-7c59-account-create-update-5sh5r\" (UID: \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\") " pod="openstack/neutron-7c59-account-create-update-5sh5r" Feb 20 12:19:47.174829 master-0 kubenswrapper[31420]: I0220 12:19:47.172130 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-config-data\") pod \"keystone-db-sync-bg44c\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") " pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.174829 master-0 kubenswrapper[31420]: I0220 12:19:47.172239 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-combined-ca-bundle\") pod 
\"keystone-db-sync-bg44c\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") " pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.174829 master-0 kubenswrapper[31420]: I0220 12:19:47.172276 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkx9r\" (UniqueName: \"kubernetes.io/projected/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-kube-api-access-xkx9r\") pod \"neutron-7c59-account-create-update-5sh5r\" (UID: \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\") " pod="openstack/neutron-7c59-account-create-update-5sh5r" Feb 20 12:19:47.174829 master-0 kubenswrapper[31420]: I0220 12:19:47.172300 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhpqj\" (UniqueName: \"kubernetes.io/projected/67854856-9af9-4d6c-af39-4a6c05afaa69-kube-api-access-dhpqj\") pod \"keystone-db-sync-bg44c\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") " pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.174829 master-0 kubenswrapper[31420]: I0220 12:19:47.174667 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-operator-scripts\") pod \"neutron-7c59-account-create-update-5sh5r\" (UID: \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\") " pod="openstack/neutron-7c59-account-create-update-5sh5r" Feb 20 12:19:47.205454 master-0 kubenswrapper[31420]: I0220 12:19:47.205413 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkx9r\" (UniqueName: \"kubernetes.io/projected/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-kube-api-access-xkx9r\") pod \"neutron-7c59-account-create-update-5sh5r\" (UID: \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\") " pod="openstack/neutron-7c59-account-create-update-5sh5r" Feb 20 12:19:47.274770 master-0 kubenswrapper[31420]: I0220 12:19:47.274698 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-config-data\") pod \"keystone-db-sync-bg44c\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") " pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.274966 master-0 kubenswrapper[31420]: I0220 12:19:47.274887 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-combined-ca-bundle\") pod \"keystone-db-sync-bg44c\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") " pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.274966 master-0 kubenswrapper[31420]: I0220 12:19:47.274936 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhpqj\" (UniqueName: \"kubernetes.io/projected/67854856-9af9-4d6c-af39-4a6c05afaa69-kube-api-access-dhpqj\") pod \"keystone-db-sync-bg44c\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") " pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.279595 master-0 kubenswrapper[31420]: I0220 12:19:47.279201 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-combined-ca-bundle\") pod \"keystone-db-sync-bg44c\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") " pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.279595 master-0 kubenswrapper[31420]: I0220 12:19:47.279481 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-config-data\") pod \"keystone-db-sync-bg44c\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") " pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.303638 master-0 kubenswrapper[31420]: I0220 12:19:47.303590 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhpqj\" (UniqueName: 
\"kubernetes.io/projected/67854856-9af9-4d6c-af39-4a6c05afaa69-kube-api-access-dhpqj\") pod \"keystone-db-sync-bg44c\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") " pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.491710 master-0 kubenswrapper[31420]: W0220 12:19:47.491028 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff72fc46_bb67_400f_b402_cc01ed97f277.slice/crio-79f9b4a71bea47eb095b39b581eb82697543340bc9bb88db20289d3f0fece8a4 WatchSource:0}: Error finding container 79f9b4a71bea47eb095b39b581eb82697543340bc9bb88db20289d3f0fece8a4: Status 404 returned error can't find the container with id 79f9b4a71bea47eb095b39b581eb82697543340bc9bb88db20289d3f0fece8a4 Feb 20 12:19:47.496003 master-0 kubenswrapper[31420]: I0220 12:19:47.493801 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c59-account-create-update-5sh5r" Feb 20 12:19:47.504567 master-0 kubenswrapper[31420]: I0220 12:19:47.501367 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bg44c" Feb 20 12:19:47.514033 master-0 kubenswrapper[31420]: I0220 12:19:47.513974 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2jfwr"] Feb 20 12:19:47.692586 master-0 kubenswrapper[31420]: I0220 12:19:47.692489 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5976-account-create-update-4hl72"] Feb 20 12:19:47.705839 master-0 kubenswrapper[31420]: I0220 12:19:47.705787 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8ds52"] Feb 20 12:19:47.779418 master-0 kubenswrapper[31420]: I0220 12:19:47.778669 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2jfwr" event={"ID":"ff72fc46-bb67-400f-b402-cc01ed97f277","Type":"ContainerStarted","Data":"8b0d3997e849ef504066bb898a28f27f05b34e75f7b68648844a7e24ece351a0"} Feb 20 12:19:47.779418 master-0 kubenswrapper[31420]: I0220 12:19:47.779392 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2jfwr" event={"ID":"ff72fc46-bb67-400f-b402-cc01ed97f277","Type":"ContainerStarted","Data":"79f9b4a71bea47eb095b39b581eb82697543340bc9bb88db20289d3f0fece8a4"} Feb 20 12:19:47.779630 master-0 kubenswrapper[31420]: I0220 12:19:47.779484 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8ds52" event={"ID":"4dc5e57b-2b58-449c-95e3-844ee6b42236","Type":"ContainerStarted","Data":"756e862a4fd56a88625b601961e22ed0f33680bd9871298627b6f78f63c9f49c"} Feb 20 12:19:47.780689 master-0 kubenswrapper[31420]: I0220 12:19:47.780468 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5976-account-create-update-4hl72" event={"ID":"f94c78c2-eb36-445b-a302-a5f67efdd1e8","Type":"ContainerStarted","Data":"afa591e7b7a5b3b5ba3f2d47bd37f78b5840c7cb13b6ef3806842de24e86cba2"} Feb 20 12:19:47.828562 master-0 kubenswrapper[31420]: I0220 12:19:47.809654 31420 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-2jfwr" podStartSLOduration=1.8096273520000001 podStartE2EDuration="1.809627352s" podCreationTimestamp="2026-02-20 12:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:19:47.803254712 +0000 UTC m=+892.522492943" watchObservedRunningTime="2026-02-20 12:19:47.809627352 +0000 UTC m=+892.528865583" Feb 20 12:19:48.045041 master-0 kubenswrapper[31420]: I0220 12:19:48.044979 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c59-account-create-update-5sh5r"] Feb 20 12:19:48.096744 master-0 kubenswrapper[31420]: W0220 12:19:48.096654 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1bb6428_4a9d_4b9c_b6ba_d45526559d3a.slice/crio-7dc6111de858e6d7b18394d48932814e07b376c20bd969007fb204ae94dc3ffd WatchSource:0}: Error finding container 7dc6111de858e6d7b18394d48932814e07b376c20bd969007fb204ae94dc3ffd: Status 404 returned error can't find the container with id 7dc6111de858e6d7b18394d48932814e07b376c20bd969007fb204ae94dc3ffd Feb 20 12:19:48.217550 master-0 kubenswrapper[31420]: I0220 12:19:48.209810 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bg44c"] Feb 20 12:19:48.459082 master-0 kubenswrapper[31420]: I0220 12:19:48.459032 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 20 12:19:48.799624 master-0 kubenswrapper[31420]: I0220 12:19:48.797944 31420 generic.go:334] "Generic (PLEG): container finished" podID="4dc5e57b-2b58-449c-95e3-844ee6b42236" containerID="bc235ae96af00ab441d425f1b0833f39c43498f8a4b325b2b78ac03deb486c99" exitCode=0 Feb 20 12:19:48.799624 master-0 kubenswrapper[31420]: I0220 12:19:48.798074 31420 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-db-create-8ds52" event={"ID":"4dc5e57b-2b58-449c-95e3-844ee6b42236","Type":"ContainerDied","Data":"bc235ae96af00ab441d425f1b0833f39c43498f8a4b325b2b78ac03deb486c99"} Feb 20 12:19:48.802298 master-0 kubenswrapper[31420]: I0220 12:19:48.800810 31420 generic.go:334] "Generic (PLEG): container finished" podID="f94c78c2-eb36-445b-a302-a5f67efdd1e8" containerID="718efa5e0092db719d3aa3dc1f1adc336410e90ed8c7ad41f1becfa9beb14c6e" exitCode=0 Feb 20 12:19:48.802298 master-0 kubenswrapper[31420]: I0220 12:19:48.800939 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5976-account-create-update-4hl72" event={"ID":"f94c78c2-eb36-445b-a302-a5f67efdd1e8","Type":"ContainerDied","Data":"718efa5e0092db719d3aa3dc1f1adc336410e90ed8c7ad41f1becfa9beb14c6e"} Feb 20 12:19:48.805360 master-0 kubenswrapper[31420]: I0220 12:19:48.805293 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bg44c" event={"ID":"67854856-9af9-4d6c-af39-4a6c05afaa69","Type":"ContainerStarted","Data":"d0ce262de38441f59c76125fc51f10e63bbf8f7dcf76254f1f8a4c31bf1fd9db"} Feb 20 12:19:48.807447 master-0 kubenswrapper[31420]: I0220 12:19:48.807420 31420 generic.go:334] "Generic (PLEG): container finished" podID="ff72fc46-bb67-400f-b402-cc01ed97f277" containerID="8b0d3997e849ef504066bb898a28f27f05b34e75f7b68648844a7e24ece351a0" exitCode=0 Feb 20 12:19:48.807507 master-0 kubenswrapper[31420]: I0220 12:19:48.807491 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2jfwr" event={"ID":"ff72fc46-bb67-400f-b402-cc01ed97f277","Type":"ContainerDied","Data":"8b0d3997e849ef504066bb898a28f27f05b34e75f7b68648844a7e24ece351a0"} Feb 20 12:19:48.812619 master-0 kubenswrapper[31420]: I0220 12:19:48.812510 31420 generic.go:334] "Generic (PLEG): container finished" podID="d1bb6428-4a9d-4b9c-b6ba-d45526559d3a" containerID="c13474afee77470017ea846d2ca6f3fec08ef7c3ab5607a81156d250816ab599" exitCode=0 Feb 
20 12:19:48.812687 master-0 kubenswrapper[31420]: I0220 12:19:48.812550 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c59-account-create-update-5sh5r" event={"ID":"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a","Type":"ContainerDied","Data":"c13474afee77470017ea846d2ca6f3fec08ef7c3ab5607a81156d250816ab599"} Feb 20 12:19:48.812735 master-0 kubenswrapper[31420]: I0220 12:19:48.812688 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c59-account-create-update-5sh5r" event={"ID":"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a","Type":"ContainerStarted","Data":"7dc6111de858e6d7b18394d48932814e07b376c20bd969007fb204ae94dc3ffd"} Feb 20 12:19:51.561710 master-0 kubenswrapper[31420]: I0220 12:19:51.561633 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:19:51.686506 master-0 kubenswrapper[31420]: I0220 12:19:51.686301 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-676b9854bc-9lfkk"] Feb 20 12:19:51.687094 master-0 kubenswrapper[31420]: I0220 12:19:51.687049 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" podUID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" containerName="dnsmasq-dns" containerID="cri-o://2e3389cb8f41c5ec97f02aa1a0138cb9bf08aabcca9c34064661418399af110e" gracePeriod=10 Feb 20 12:19:51.863734 master-0 kubenswrapper[31420]: I0220 12:19:51.863610 31420 generic.go:334] "Generic (PLEG): container finished" podID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" containerID="2e3389cb8f41c5ec97f02aa1a0138cb9bf08aabcca9c34064661418399af110e" exitCode=0 Feb 20 12:19:51.863981 master-0 kubenswrapper[31420]: I0220 12:19:51.863677 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" 
event={"ID":"739e8b2d-4f4b-4b30-8836-c655c9ebb68a","Type":"ContainerDied","Data":"2e3389cb8f41c5ec97f02aa1a0138cb9bf08aabcca9c34064661418399af110e"} Feb 20 12:19:53.037608 master-0 kubenswrapper[31420]: I0220 12:19:53.036327 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2jfwr" Feb 20 12:19:53.041684 master-0 kubenswrapper[31420]: I0220 12:19:53.041443 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c59-account-create-update-5sh5r" Feb 20 12:19:53.049656 master-0 kubenswrapper[31420]: I0220 12:19:53.049161 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5976-account-create-update-4hl72" Feb 20 12:19:53.115709 master-0 kubenswrapper[31420]: I0220 12:19:53.115563 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8ds52" Feb 20 12:19:53.192023 master-0 kubenswrapper[31420]: I0220 12:19:53.191923 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8ssj\" (UniqueName: \"kubernetes.io/projected/ff72fc46-bb67-400f-b402-cc01ed97f277-kube-api-access-m8ssj\") pod \"ff72fc46-bb67-400f-b402-cc01ed97f277\" (UID: \"ff72fc46-bb67-400f-b402-cc01ed97f277\") " Feb 20 12:19:53.192023 master-0 kubenswrapper[31420]: I0220 12:19:53.192025 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkx9r\" (UniqueName: \"kubernetes.io/projected/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-kube-api-access-xkx9r\") pod \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\" (UID: \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\") " Feb 20 12:19:53.192575 master-0 kubenswrapper[31420]: I0220 12:19:53.192063 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-operator-scripts\") pod \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\" (UID: \"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a\") " Feb 20 12:19:53.192575 master-0 kubenswrapper[31420]: I0220 12:19:53.192165 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9bc6\" (UniqueName: \"kubernetes.io/projected/f94c78c2-eb36-445b-a302-a5f67efdd1e8-kube-api-access-k9bc6\") pod \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\" (UID: \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\") " Feb 20 12:19:53.192575 master-0 kubenswrapper[31420]: I0220 12:19:53.192266 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff72fc46-bb67-400f-b402-cc01ed97f277-operator-scripts\") pod \"ff72fc46-bb67-400f-b402-cc01ed97f277\" (UID: \"ff72fc46-bb67-400f-b402-cc01ed97f277\") " Feb 20 12:19:53.192575 master-0 kubenswrapper[31420]: I0220 12:19:53.192320 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pccf5\" (UniqueName: \"kubernetes.io/projected/4dc5e57b-2b58-449c-95e3-844ee6b42236-kube-api-access-pccf5\") pod \"4dc5e57b-2b58-449c-95e3-844ee6b42236\" (UID: \"4dc5e57b-2b58-449c-95e3-844ee6b42236\") " Feb 20 12:19:53.192575 master-0 kubenswrapper[31420]: I0220 12:19:53.192404 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f94c78c2-eb36-445b-a302-a5f67efdd1e8-operator-scripts\") pod \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\" (UID: \"f94c78c2-eb36-445b-a302-a5f67efdd1e8\") " Feb 20 12:19:53.193327 master-0 kubenswrapper[31420]: I0220 12:19:53.193131 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff72fc46-bb67-400f-b402-cc01ed97f277-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"ff72fc46-bb67-400f-b402-cc01ed97f277" (UID: "ff72fc46-bb67-400f-b402-cc01ed97f277"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:53.193377 master-0 kubenswrapper[31420]: I0220 12:19:53.193341 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1bb6428-4a9d-4b9c-b6ba-d45526559d3a" (UID: "d1bb6428-4a9d-4b9c-b6ba-d45526559d3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:53.193875 master-0 kubenswrapper[31420]: I0220 12:19:53.193845 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94c78c2-eb36-445b-a302-a5f67efdd1e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f94c78c2-eb36-445b-a302-a5f67efdd1e8" (UID: "f94c78c2-eb36-445b-a302-a5f67efdd1e8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:53.194025 master-0 kubenswrapper[31420]: I0220 12:19:53.193982 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.194096 master-0 kubenswrapper[31420]: I0220 12:19:53.194032 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff72fc46-bb67-400f-b402-cc01ed97f277-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.195920 master-0 kubenswrapper[31420]: I0220 12:19:53.195879 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94c78c2-eb36-445b-a302-a5f67efdd1e8-kube-api-access-k9bc6" (OuterVolumeSpecName: "kube-api-access-k9bc6") pod "f94c78c2-eb36-445b-a302-a5f67efdd1e8" (UID: "f94c78c2-eb36-445b-a302-a5f67efdd1e8"). InnerVolumeSpecName "kube-api-access-k9bc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:53.199437 master-0 kubenswrapper[31420]: I0220 12:19:53.199337 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff72fc46-bb67-400f-b402-cc01ed97f277-kube-api-access-m8ssj" (OuterVolumeSpecName: "kube-api-access-m8ssj") pod "ff72fc46-bb67-400f-b402-cc01ed97f277" (UID: "ff72fc46-bb67-400f-b402-cc01ed97f277"). InnerVolumeSpecName "kube-api-access-m8ssj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:53.200758 master-0 kubenswrapper[31420]: I0220 12:19:53.200725 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-kube-api-access-xkx9r" (OuterVolumeSpecName: "kube-api-access-xkx9r") pod "d1bb6428-4a9d-4b9c-b6ba-d45526559d3a" (UID: "d1bb6428-4a9d-4b9c-b6ba-d45526559d3a"). 
InnerVolumeSpecName "kube-api-access-xkx9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:53.217357 master-0 kubenswrapper[31420]: I0220 12:19:53.202195 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc5e57b-2b58-449c-95e3-844ee6b42236-kube-api-access-pccf5" (OuterVolumeSpecName: "kube-api-access-pccf5") pod "4dc5e57b-2b58-449c-95e3-844ee6b42236" (UID: "4dc5e57b-2b58-449c-95e3-844ee6b42236"). InnerVolumeSpecName "kube-api-access-pccf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:53.295668 master-0 kubenswrapper[31420]: I0220 12:19:53.295611 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc5e57b-2b58-449c-95e3-844ee6b42236-operator-scripts\") pod \"4dc5e57b-2b58-449c-95e3-844ee6b42236\" (UID: \"4dc5e57b-2b58-449c-95e3-844ee6b42236\") " Feb 20 12:19:53.296196 master-0 kubenswrapper[31420]: I0220 12:19:53.296143 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc5e57b-2b58-449c-95e3-844ee6b42236-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4dc5e57b-2b58-449c-95e3-844ee6b42236" (UID: "4dc5e57b-2b58-449c-95e3-844ee6b42236"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:53.296248 master-0 kubenswrapper[31420]: I0220 12:19:53.296180 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pccf5\" (UniqueName: \"kubernetes.io/projected/4dc5e57b-2b58-449c-95e3-844ee6b42236-kube-api-access-pccf5\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.296248 master-0 kubenswrapper[31420]: I0220 12:19:53.296232 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f94c78c2-eb36-445b-a302-a5f67efdd1e8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.296322 master-0 kubenswrapper[31420]: I0220 12:19:53.296253 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8ssj\" (UniqueName: \"kubernetes.io/projected/ff72fc46-bb67-400f-b402-cc01ed97f277-kube-api-access-m8ssj\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.296322 master-0 kubenswrapper[31420]: I0220 12:19:53.296270 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkx9r\" (UniqueName: \"kubernetes.io/projected/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a-kube-api-access-xkx9r\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.296322 master-0 kubenswrapper[31420]: I0220 12:19:53.296285 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9bc6\" (UniqueName: \"kubernetes.io/projected/f94c78c2-eb36-445b-a302-a5f67efdd1e8-kube-api-access-k9bc6\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.300502 master-0 kubenswrapper[31420]: I0220 12:19:53.300459 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" Feb 20 12:19:53.408905 master-0 kubenswrapper[31420]: I0220 12:19:53.407079 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc5e57b-2b58-449c-95e3-844ee6b42236-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.508379 master-0 kubenswrapper[31420]: I0220 12:19:53.508313 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z62j8\" (UniqueName: \"kubernetes.io/projected/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-kube-api-access-z62j8\") pod \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " Feb 20 12:19:53.508632 master-0 kubenswrapper[31420]: I0220 12:19:53.508480 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-config\") pod \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " Feb 20 12:19:53.508632 master-0 kubenswrapper[31420]: I0220 12:19:53.508541 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-sb\") pod \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " Feb 20 12:19:53.508632 master-0 kubenswrapper[31420]: I0220 12:19:53.508612 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-dns-svc\") pod \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " Feb 20 12:19:53.508763 master-0 kubenswrapper[31420]: I0220 12:19:53.508686 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-nb\") pod \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\" (UID: \"739e8b2d-4f4b-4b30-8836-c655c9ebb68a\") " Feb 20 12:19:53.511677 master-0 kubenswrapper[31420]: I0220 12:19:53.511644 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-kube-api-access-z62j8" (OuterVolumeSpecName: "kube-api-access-z62j8") pod "739e8b2d-4f4b-4b30-8836-c655c9ebb68a" (UID: "739e8b2d-4f4b-4b30-8836-c655c9ebb68a"). InnerVolumeSpecName "kube-api-access-z62j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:19:53.557035 master-0 kubenswrapper[31420]: I0220 12:19:53.556624 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "739e8b2d-4f4b-4b30-8836-c655c9ebb68a" (UID: "739e8b2d-4f4b-4b30-8836-c655c9ebb68a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:53.581477 master-0 kubenswrapper[31420]: I0220 12:19:53.581388 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-config" (OuterVolumeSpecName: "config") pod "739e8b2d-4f4b-4b30-8836-c655c9ebb68a" (UID: "739e8b2d-4f4b-4b30-8836-c655c9ebb68a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:53.584031 master-0 kubenswrapper[31420]: I0220 12:19:53.583963 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "739e8b2d-4f4b-4b30-8836-c655c9ebb68a" (UID: "739e8b2d-4f4b-4b30-8836-c655c9ebb68a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:53.590483 master-0 kubenswrapper[31420]: I0220 12:19:53.590423 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "739e8b2d-4f4b-4b30-8836-c655c9ebb68a" (UID: "739e8b2d-4f4b-4b30-8836-c655c9ebb68a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:19:53.611334 master-0 kubenswrapper[31420]: I0220 12:19:53.611260 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.611334 master-0 kubenswrapper[31420]: I0220 12:19:53.611305 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.611334 master-0 kubenswrapper[31420]: I0220 12:19:53.611317 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.611334 master-0 kubenswrapper[31420]: I0220 12:19:53.611325 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.611334 master-0 kubenswrapper[31420]: I0220 12:19:53.611335 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z62j8\" (UniqueName: \"kubernetes.io/projected/739e8b2d-4f4b-4b30-8836-c655c9ebb68a-kube-api-access-z62j8\") on node \"master-0\" DevicePath \"\"" Feb 20 12:19:53.895948 master-0 kubenswrapper[31420]: I0220 
12:19:53.895862 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8ds52"
Feb 20 12:19:53.897022 master-0 kubenswrapper[31420]: I0220 12:19:53.896908 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8ds52" event={"ID":"4dc5e57b-2b58-449c-95e3-844ee6b42236","Type":"ContainerDied","Data":"756e862a4fd56a88625b601961e22ed0f33680bd9871298627b6f78f63c9f49c"}
Feb 20 12:19:53.897408 master-0 kubenswrapper[31420]: I0220 12:19:53.897043 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="756e862a4fd56a88625b601961e22ed0f33680bd9871298627b6f78f63c9f49c"
Feb 20 12:19:53.907182 master-0 kubenswrapper[31420]: I0220 12:19:53.903408 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" event={"ID":"739e8b2d-4f4b-4b30-8836-c655c9ebb68a","Type":"ContainerDied","Data":"88ea0d441e05ac9923eaad90b285acfde61d86ccfb7f745125e748895217002f"}
Feb 20 12:19:53.907182 master-0 kubenswrapper[31420]: I0220 12:19:53.903443 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk"
Feb 20 12:19:53.907182 master-0 kubenswrapper[31420]: I0220 12:19:53.903508 31420 scope.go:117] "RemoveContainer" containerID="2e3389cb8f41c5ec97f02aa1a0138cb9bf08aabcca9c34064661418399af110e"
Feb 20 12:19:53.907182 master-0 kubenswrapper[31420]: I0220 12:19:53.905769 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5976-account-create-update-4hl72" event={"ID":"f94c78c2-eb36-445b-a302-a5f67efdd1e8","Type":"ContainerDied","Data":"afa591e7b7a5b3b5ba3f2d47bd37f78b5840c7cb13b6ef3806842de24e86cba2"}
Feb 20 12:19:53.907182 master-0 kubenswrapper[31420]: I0220 12:19:53.905826 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa591e7b7a5b3b5ba3f2d47bd37f78b5840c7cb13b6ef3806842de24e86cba2"
Feb 20 12:19:53.907182 master-0 kubenswrapper[31420]: I0220 12:19:53.905914 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5976-account-create-update-4hl72"
Feb 20 12:19:53.909165 master-0 kubenswrapper[31420]: I0220 12:19:53.908327 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bg44c" event={"ID":"67854856-9af9-4d6c-af39-4a6c05afaa69","Type":"ContainerStarted","Data":"8e9ea463f1c7507d7bcb6ea06d6b97f934c584defeeedab8c83b630db28fe8cf"}
Feb 20 12:19:53.912231 master-0 kubenswrapper[31420]: I0220 12:19:53.911444 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2jfwr"
Feb 20 12:19:53.912414 master-0 kubenswrapper[31420]: I0220 12:19:53.912293 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2jfwr" event={"ID":"ff72fc46-bb67-400f-b402-cc01ed97f277","Type":"ContainerDied","Data":"79f9b4a71bea47eb095b39b581eb82697543340bc9bb88db20289d3f0fece8a4"}
Feb 20 12:19:53.912414 master-0 kubenswrapper[31420]: I0220 12:19:53.912326 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79f9b4a71bea47eb095b39b581eb82697543340bc9bb88db20289d3f0fece8a4"
Feb 20 12:19:53.915108 master-0 kubenswrapper[31420]: I0220 12:19:53.915043 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c59-account-create-update-5sh5r" event={"ID":"d1bb6428-4a9d-4b9c-b6ba-d45526559d3a","Type":"ContainerDied","Data":"7dc6111de858e6d7b18394d48932814e07b376c20bd969007fb204ae94dc3ffd"}
Feb 20 12:19:53.915108 master-0 kubenswrapper[31420]: I0220 12:19:53.915088 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dc6111de858e6d7b18394d48932814e07b376c20bd969007fb204ae94dc3ffd"
Feb 20 12:19:53.915382 master-0 kubenswrapper[31420]: I0220 12:19:53.915164 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c59-account-create-update-5sh5r"
Feb 20 12:19:53.958156 master-0 kubenswrapper[31420]: I0220 12:19:53.958015 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bg44c" podStartSLOduration=3.23041083 podStartE2EDuration="7.957987706s" podCreationTimestamp="2026-02-20 12:19:46 +0000 UTC" firstStartedPulling="2026-02-20 12:19:48.205761238 +0000 UTC m=+892.924999479" lastFinishedPulling="2026-02-20 12:19:52.933338094 +0000 UTC m=+897.652576355" observedRunningTime="2026-02-20 12:19:53.941832619 +0000 UTC m=+898.661070900" watchObservedRunningTime="2026-02-20 12:19:53.957987706 +0000 UTC m=+898.677225987"
Feb 20 12:19:53.995277 master-0 kubenswrapper[31420]: I0220 12:19:53.993877 31420 scope.go:117] "RemoveContainer" containerID="0bd56b6dff0477fda0ff5cbfddf3cc3bc39c7fc06b098cadfaf944cff0bf86ce"
Feb 20 12:19:54.047235 master-0 kubenswrapper[31420]: I0220 12:19:54.047154 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-676b9854bc-9lfkk"]
Feb 20 12:19:54.059682 master-0 kubenswrapper[31420]: I0220 12:19:54.059605 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-676b9854bc-9lfkk"]
Feb 20 12:19:55.510587 master-0 kubenswrapper[31420]: I0220 12:19:55.510502 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" path="/var/lib/kubelet/pods/739e8b2d-4f4b-4b30-8836-c655c9ebb68a/volumes"
Feb 20 12:19:57.981833 master-0 kubenswrapper[31420]: I0220 12:19:57.981743 31420 generic.go:334] "Generic (PLEG): container finished" podID="67854856-9af9-4d6c-af39-4a6c05afaa69" containerID="8e9ea463f1c7507d7bcb6ea06d6b97f934c584defeeedab8c83b630db28fe8cf" exitCode=0
Feb 20 12:19:57.981833 master-0 kubenswrapper[31420]: I0220 12:19:57.981808 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bg44c" event={"ID":"67854856-9af9-4d6c-af39-4a6c05afaa69","Type":"ContainerDied","Data":"8e9ea463f1c7507d7bcb6ea06d6b97f934c584defeeedab8c83b630db28fe8cf"}
Feb 20 12:19:58.063624 master-0 kubenswrapper[31420]: I0220 12:19:58.063479 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-676b9854bc-9lfkk" podUID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.185:5353: i/o timeout"
Feb 20 12:19:59.458811 master-0 kubenswrapper[31420]: I0220 12:19:59.458732 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bg44c"
Feb 20 12:19:59.548829 master-0 kubenswrapper[31420]: I0220 12:19:59.548748 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-config-data\") pod \"67854856-9af9-4d6c-af39-4a6c05afaa69\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") "
Feb 20 12:19:59.549036 master-0 kubenswrapper[31420]: I0220 12:19:59.549010 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-combined-ca-bundle\") pod \"67854856-9af9-4d6c-af39-4a6c05afaa69\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") "
Feb 20 12:19:59.549162 master-0 kubenswrapper[31420]: I0220 12:19:59.549129 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhpqj\" (UniqueName: \"kubernetes.io/projected/67854856-9af9-4d6c-af39-4a6c05afaa69-kube-api-access-dhpqj\") pod \"67854856-9af9-4d6c-af39-4a6c05afaa69\" (UID: \"67854856-9af9-4d6c-af39-4a6c05afaa69\") "
Feb 20 12:19:59.554044 master-0 kubenswrapper[31420]: I0220 12:19:59.553997 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67854856-9af9-4d6c-af39-4a6c05afaa69-kube-api-access-dhpqj" (OuterVolumeSpecName: "kube-api-access-dhpqj") pod "67854856-9af9-4d6c-af39-4a6c05afaa69" (UID: "67854856-9af9-4d6c-af39-4a6c05afaa69"). InnerVolumeSpecName "kube-api-access-dhpqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:19:59.586276 master-0 kubenswrapper[31420]: I0220 12:19:59.586201 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67854856-9af9-4d6c-af39-4a6c05afaa69" (UID: "67854856-9af9-4d6c-af39-4a6c05afaa69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:19:59.614652 master-0 kubenswrapper[31420]: I0220 12:19:59.614557 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-config-data" (OuterVolumeSpecName: "config-data") pod "67854856-9af9-4d6c-af39-4a6c05afaa69" (UID: "67854856-9af9-4d6c-af39-4a6c05afaa69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:19:59.653771 master-0 kubenswrapper[31420]: I0220 12:19:59.653705 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:19:59.653771 master-0 kubenswrapper[31420]: I0220 12:19:59.653772 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhpqj\" (UniqueName: \"kubernetes.io/projected/67854856-9af9-4d6c-af39-4a6c05afaa69-kube-api-access-dhpqj\") on node \"master-0\" DevicePath \"\""
Feb 20 12:19:59.654050 master-0 kubenswrapper[31420]: I0220 12:19:59.653796 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67854856-9af9-4d6c-af39-4a6c05afaa69-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:00.014450 master-0 kubenswrapper[31420]: I0220 12:20:00.014390 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bg44c" event={"ID":"67854856-9af9-4d6c-af39-4a6c05afaa69","Type":"ContainerDied","Data":"d0ce262de38441f59c76125fc51f10e63bbf8f7dcf76254f1f8a4c31bf1fd9db"}
Feb 20 12:20:00.014925 master-0 kubenswrapper[31420]: I0220 12:20:00.014881 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ce262de38441f59c76125fc51f10e63bbf8f7dcf76254f1f8a4c31bf1fd9db"
Feb 20 12:20:00.015104 master-0 kubenswrapper[31420]: I0220 12:20:00.014488 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bg44c"
Feb 20 12:20:00.351777 master-0 kubenswrapper[31420]: I0220 12:20:00.351704 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7cvhp"]
Feb 20 12:20:00.352191 master-0 kubenswrapper[31420]: E0220 12:20:00.352171 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94c78c2-eb36-445b-a302-a5f67efdd1e8" containerName="mariadb-account-create-update"
Feb 20 12:20:00.352191 master-0 kubenswrapper[31420]: I0220 12:20:00.352188 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94c78c2-eb36-445b-a302-a5f67efdd1e8" containerName="mariadb-account-create-update"
Feb 20 12:20:00.352296 master-0 kubenswrapper[31420]: E0220 12:20:00.352222 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72fc46-bb67-400f-b402-cc01ed97f277" containerName="mariadb-database-create"
Feb 20 12:20:00.352296 master-0 kubenswrapper[31420]: I0220 12:20:00.352231 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72fc46-bb67-400f-b402-cc01ed97f277" containerName="mariadb-database-create"
Feb 20 12:20:00.352296 master-0 kubenswrapper[31420]: E0220 12:20:00.352258 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bb6428-4a9d-4b9c-b6ba-d45526559d3a" containerName="mariadb-account-create-update"
Feb 20 12:20:00.352296 master-0 kubenswrapper[31420]: I0220 12:20:00.352265 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bb6428-4a9d-4b9c-b6ba-d45526559d3a" containerName="mariadb-account-create-update"
Feb 20 12:20:00.352296 master-0 kubenswrapper[31420]: E0220 12:20:00.352286 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67854856-9af9-4d6c-af39-4a6c05afaa69" containerName="keystone-db-sync"
Feb 20 12:20:00.352296 master-0 kubenswrapper[31420]: I0220 12:20:00.352293 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="67854856-9af9-4d6c-af39-4a6c05afaa69" containerName="keystone-db-sync"
Feb 20 12:20:00.352549 master-0 kubenswrapper[31420]: E0220 12:20:00.352311 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" containerName="dnsmasq-dns"
Feb 20 12:20:00.352549 master-0 kubenswrapper[31420]: I0220 12:20:00.352319 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" containerName="dnsmasq-dns"
Feb 20 12:20:00.352549 master-0 kubenswrapper[31420]: E0220 12:20:00.352333 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc5e57b-2b58-449c-95e3-844ee6b42236" containerName="mariadb-database-create"
Feb 20 12:20:00.352549 master-0 kubenswrapper[31420]: I0220 12:20:00.352339 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc5e57b-2b58-449c-95e3-844ee6b42236" containerName="mariadb-database-create"
Feb 20 12:20:00.352549 master-0 kubenswrapper[31420]: E0220 12:20:00.352350 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" containerName="init"
Feb 20 12:20:00.352549 master-0 kubenswrapper[31420]: I0220 12:20:00.352356 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" containerName="init"
Feb 20 12:20:00.352809 master-0 kubenswrapper[31420]: I0220 12:20:00.352567 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1bb6428-4a9d-4b9c-b6ba-d45526559d3a" containerName="mariadb-account-create-update"
Feb 20 12:20:00.352809 master-0 kubenswrapper[31420]: I0220 12:20:00.352585 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="f94c78c2-eb36-445b-a302-a5f67efdd1e8" containerName="mariadb-account-create-update"
Feb 20 12:20:00.352809 master-0 kubenswrapper[31420]: I0220 12:20:00.352606 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="739e8b2d-4f4b-4b30-8836-c655c9ebb68a" containerName="dnsmasq-dns"
Feb 20 12:20:00.352809 master-0 kubenswrapper[31420]: I0220 12:20:00.352631 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff72fc46-bb67-400f-b402-cc01ed97f277" containerName="mariadb-database-create"
Feb 20 12:20:00.352809 master-0 kubenswrapper[31420]: I0220 12:20:00.352642 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc5e57b-2b58-449c-95e3-844ee6b42236" containerName="mariadb-database-create"
Feb 20 12:20:00.352809 master-0 kubenswrapper[31420]: I0220 12:20:00.352656 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="67854856-9af9-4d6c-af39-4a6c05afaa69" containerName="keystone-db-sync"
Feb 20 12:20:00.361932 master-0 kubenswrapper[31420]: I0220 12:20:00.353294 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.368436 master-0 kubenswrapper[31420]: I0220 12:20:00.368390 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 20 12:20:00.368700 master-0 kubenswrapper[31420]: I0220 12:20:00.368515 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 20 12:20:00.368700 master-0 kubenswrapper[31420]: I0220 12:20:00.368408 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 20 12:20:00.368829 master-0 kubenswrapper[31420]: I0220 12:20:00.368760 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 20 12:20:00.404349 master-0 kubenswrapper[31420]: I0220 12:20:00.389319 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-866f884f75-w68xj"]
Feb 20 12:20:00.404349 master-0 kubenswrapper[31420]: I0220 12:20:00.391323 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.429036 master-0 kubenswrapper[31420]: I0220 12:20:00.428924 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7cvhp"]
Feb 20 12:20:00.452865 master-0 kubenswrapper[31420]: I0220 12:20:00.452797 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-866f884f75-w68xj"]
Feb 20 12:20:00.475133 master-0 kubenswrapper[31420]: I0220 12:20:00.475056 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-fernet-keys\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475141 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6npg\" (UniqueName: \"kubernetes.io/projected/86d64296-9441-4c59-ac5a-0a6d94cf9499-kube-api-access-m6npg\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475192 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-sb\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475220 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-config\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475269 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-nb\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475293 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-config-data\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475351 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-svc\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475401 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-swift-storage-0\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475425 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-scripts\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475458 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-credential-keys\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475480 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-combined-ca-bundle\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.475851 master-0 kubenswrapper[31420]: I0220 12:20:00.475549 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stlld\" (UniqueName: \"kubernetes.io/projected/4dba19cd-d9f4-419e-acd9-5a240202d577-kube-api-access-stlld\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.530731 master-0 kubenswrapper[31420]: I0220 12:20:00.530574 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-4q2vm"]
Feb 20 12:20:00.533695 master-0 kubenswrapper[31420]: I0220 12:20:00.533623 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-4q2vm"
Feb 20 12:20:00.546863 master-0 kubenswrapper[31420]: I0220 12:20:00.546223 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-4q2vm"]
Feb 20 12:20:00.584785 master-0 kubenswrapper[31420]: I0220 12:20:00.584430 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-fernet-keys\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.585484 master-0 kubenswrapper[31420]: I0220 12:20:00.585435 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6npg\" (UniqueName: \"kubernetes.io/projected/86d64296-9441-4c59-ac5a-0a6d94cf9499-kube-api-access-m6npg\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.585610 master-0 kubenswrapper[31420]: I0220 12:20:00.585571 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-sb\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.586229 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-config\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.586604 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-nb\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.586659 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-config-data\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.586884 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-svc\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.586916 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-sb\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.586954 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f57898-ebe9-4c8b-984b-86a4b35fed36-operator-scripts\") pod \"ironic-db-create-4q2vm\" (UID: \"33f57898-ebe9-4c8b-984b-86a4b35fed36\") " pod="openstack/ironic-db-create-4q2vm"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.587078 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-scripts\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.587109 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-swift-storage-0\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.587197 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-credential-keys\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.587267 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-combined-ca-bundle\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.587375 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vh54\" (UniqueName: \"kubernetes.io/projected/33f57898-ebe9-4c8b-984b-86a4b35fed36-kube-api-access-2vh54\") pod \"ironic-db-create-4q2vm\" (UID: \"33f57898-ebe9-4c8b-984b-86a4b35fed36\") " pod="openstack/ironic-db-create-4q2vm"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.587469 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stlld\" (UniqueName: \"kubernetes.io/projected/4dba19cd-d9f4-419e-acd9-5a240202d577-kube-api-access-stlld\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.587563 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-fernet-keys\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.587831 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-nb\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.588034 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-svc\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.589010 master-0 kubenswrapper[31420]: I0220 12:20:00.588953 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-swift-storage-0\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.605654 master-0 kubenswrapper[31420]: I0220 12:20:00.590328 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-credential-keys\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.605654 master-0 kubenswrapper[31420]: I0220 12:20:00.591071 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-scripts\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.605654 master-0 kubenswrapper[31420]: I0220 12:20:00.592110 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-combined-ca-bundle\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.605654 master-0 kubenswrapper[31420]: I0220 12:20:00.595364 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-config\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.605654 master-0 kubenswrapper[31420]: I0220 12:20:00.599888 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-config-data\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.623635 master-0 kubenswrapper[31420]: I0220 12:20:00.622560 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6npg\" (UniqueName: \"kubernetes.io/projected/86d64296-9441-4c59-ac5a-0a6d94cf9499-kube-api-access-m6npg\") pod \"keystone-bootstrap-7cvhp\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") " pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.627828 master-0 kubenswrapper[31420]: I0220 12:20:00.627616 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stlld\" (UniqueName: \"kubernetes.io/projected/4dba19cd-d9f4-419e-acd9-5a240202d577-kube-api-access-stlld\") pod \"dnsmasq-dns-866f884f75-w68xj\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") " pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.664271 master-0 kubenswrapper[31420]: I0220 12:20:00.664036 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d44a4-db-sync-wkljp"]
Feb 20 12:20:00.673722 master-0 kubenswrapper[31420]: I0220 12:20:00.673651 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-db-sync-wkljp"
Feb 20 12:20:00.676859 master-0 kubenswrapper[31420]: I0220 12:20:00.676802 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-b737-account-create-update-wnbqm"]
Feb 20 12:20:00.677011 master-0 kubenswrapper[31420]: I0220 12:20:00.676959 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-scripts"
Feb 20 12:20:00.680549 master-0 kubenswrapper[31420]: I0220 12:20:00.678562 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-config-data"
Feb 20 12:20:00.696502 master-0 kubenswrapper[31420]: I0220 12:20:00.689351 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f57898-ebe9-4c8b-984b-86a4b35fed36-operator-scripts\") pod \"ironic-db-create-4q2vm\" (UID: \"33f57898-ebe9-4c8b-984b-86a4b35fed36\") " pod="openstack/ironic-db-create-4q2vm"
Feb 20 12:20:00.696502 master-0 kubenswrapper[31420]: I0220 12:20:00.689445 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vh54\" (UniqueName: \"kubernetes.io/projected/33f57898-ebe9-4c8b-984b-86a4b35fed36-kube-api-access-2vh54\") pod \"ironic-db-create-4q2vm\" (UID: \"33f57898-ebe9-4c8b-984b-86a4b35fed36\") " pod="openstack/ironic-db-create-4q2vm"
Feb 20 12:20:00.696502 master-0 kubenswrapper[31420]: I0220 12:20:00.690366 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f57898-ebe9-4c8b-984b-86a4b35fed36-operator-scripts\") pod \"ironic-db-create-4q2vm\" (UID: \"33f57898-ebe9-4c8b-984b-86a4b35fed36\") " pod="openstack/ironic-db-create-4q2vm"
Feb 20 12:20:00.696502 master-0 kubenswrapper[31420]: I0220 12:20:00.692161 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-v9jsf"]
Feb 20 12:20:00.696502 master-0 kubenswrapper[31420]: I0220 12:20:00.694161 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b737-account-create-update-wnbqm"
Feb 20 12:20:00.696502 master-0 kubenswrapper[31420]: I0220 12:20:00.695853 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-v9jsf"
Feb 20 12:20:00.698098 master-0 kubenswrapper[31420]: I0220 12:20:00.697136 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret"
Feb 20 12:20:00.698098 master-0 kubenswrapper[31420]: I0220 12:20:00.697194 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 20 12:20:00.699128 master-0 kubenswrapper[31420]: I0220 12:20:00.699061 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 20 12:20:00.707788 master-0 kubenswrapper[31420]: I0220 12:20:00.706964 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-db-sync-wkljp"]
Feb 20 12:20:00.718004 master-0 kubenswrapper[31420]: I0220 12:20:00.717950 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-b737-account-create-update-wnbqm"]
Feb 20 12:20:00.726652 master-0 kubenswrapper[31420]: I0220 12:20:00.726519 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-v9jsf"]
Feb 20 12:20:00.752091 master-0 kubenswrapper[31420]: I0220 12:20:00.752029 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:00.758073 master-0 kubenswrapper[31420]: I0220 12:20:00.757979 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vh54\" (UniqueName: \"kubernetes.io/projected/33f57898-ebe9-4c8b-984b-86a4b35fed36-kube-api-access-2vh54\") pod \"ironic-db-create-4q2vm\" (UID: \"33f57898-ebe9-4c8b-984b-86a4b35fed36\") " pod="openstack/ironic-db-create-4q2vm"
Feb 20 12:20:00.769094 master-0 kubenswrapper[31420]: I0220 12:20:00.768029 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:00.791429 master-0 kubenswrapper[31420]: I0220 12:20:00.791370 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4hvr\" (UniqueName: \"kubernetes.io/projected/e730e756-3c53-48ff-a27d-5ddbf042a996-kube-api-access-z4hvr\") pod \"neutron-db-sync-v9jsf\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") " pod="openstack/neutron-db-sync-v9jsf"
Feb 20 12:20:00.791694 master-0 kubenswrapper[31420]: I0220 12:20:00.791440 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85180db-2d00-4ec9-b408-813c4db2d86b-etc-machine-id\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp"
Feb 20 12:20:00.791694 master-0 kubenswrapper[31420]: I0220 12:20:00.791458 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-scripts\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp"
Feb 20 12:20:00.791694 master-0 kubenswrapper[31420]: I0220 12:20:00.791476 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba9661c0-9b94-468a-a207-51d7de5ecc92-operator-scripts\") pod \"ironic-b737-account-create-update-wnbqm\" (UID: \"ba9661c0-9b94-468a-a207-51d7de5ecc92\") " pod="openstack/ironic-b737-account-create-update-wnbqm"
Feb 20 12:20:00.791694 master-0 kubenswrapper[31420]: I0220 12:20:00.791505 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-config-data\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.791694 master-0 kubenswrapper[31420]: I0220 12:20:00.791588 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-db-sync-config-data\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.791694 master-0 kubenswrapper[31420]: I0220 12:20:00.791614 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgcf7\" (UniqueName: \"kubernetes.io/projected/ba9661c0-9b94-468a-a207-51d7de5ecc92-kube-api-access-vgcf7\") pod \"ironic-b737-account-create-update-wnbqm\" (UID: \"ba9661c0-9b94-468a-a207-51d7de5ecc92\") " pod="openstack/ironic-b737-account-create-update-wnbqm" Feb 20 12:20:00.791694 master-0 kubenswrapper[31420]: I0220 12:20:00.791671 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h5s5\" (UniqueName: \"kubernetes.io/projected/f85180db-2d00-4ec9-b408-813c4db2d86b-kube-api-access-6h5s5\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.791694 master-0 kubenswrapper[31420]: I0220 12:20:00.791693 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-config\") pod \"neutron-db-sync-v9jsf\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") " pod="openstack/neutron-db-sync-v9jsf" Feb 20 12:20:00.792073 master-0 kubenswrapper[31420]: I0220 12:20:00.791716 31420 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-combined-ca-bundle\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.792073 master-0 kubenswrapper[31420]: I0220 12:20:00.791734 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-combined-ca-bundle\") pod \"neutron-db-sync-v9jsf\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") " pod="openstack/neutron-db-sync-v9jsf" Feb 20 12:20:00.813104 master-0 kubenswrapper[31420]: I0220 12:20:00.813036 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866f884f75-w68xj"] Feb 20 12:20:00.850765 master-0 kubenswrapper[31420]: I0220 12:20:00.850688 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b6f45695-fkj5c"] Feb 20 12:20:00.880883 master-0 kubenswrapper[31420]: I0220 12:20:00.879005 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-4q2vm" Feb 20 12:20:00.895789 master-0 kubenswrapper[31420]: I0220 12:20:00.895544 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-db-sync-config-data\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.895789 master-0 kubenswrapper[31420]: I0220 12:20:00.895611 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgcf7\" (UniqueName: \"kubernetes.io/projected/ba9661c0-9b94-468a-a207-51d7de5ecc92-kube-api-access-vgcf7\") pod \"ironic-b737-account-create-update-wnbqm\" (UID: \"ba9661c0-9b94-468a-a207-51d7de5ecc92\") " pod="openstack/ironic-b737-account-create-update-wnbqm" Feb 20 12:20:00.895789 master-0 kubenswrapper[31420]: I0220 12:20:00.895688 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h5s5\" (UniqueName: \"kubernetes.io/projected/f85180db-2d00-4ec9-b408-813c4db2d86b-kube-api-access-6h5s5\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.895789 master-0 kubenswrapper[31420]: I0220 12:20:00.895722 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-config\") pod \"neutron-db-sync-v9jsf\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") " pod="openstack/neutron-db-sync-v9jsf" Feb 20 12:20:00.895789 master-0 kubenswrapper[31420]: I0220 12:20:00.895761 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-combined-ca-bundle\") pod 
\"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.895789 master-0 kubenswrapper[31420]: I0220 12:20:00.895786 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-combined-ca-bundle\") pod \"neutron-db-sync-v9jsf\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") " pod="openstack/neutron-db-sync-v9jsf" Feb 20 12:20:00.895789 master-0 kubenswrapper[31420]: I0220 12:20:00.895822 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4hvr\" (UniqueName: \"kubernetes.io/projected/e730e756-3c53-48ff-a27d-5ddbf042a996-kube-api-access-z4hvr\") pod \"neutron-db-sync-v9jsf\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") " pod="openstack/neutron-db-sync-v9jsf" Feb 20 12:20:00.896889 master-0 kubenswrapper[31420]: I0220 12:20:00.896792 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85180db-2d00-4ec9-b408-813c4db2d86b-etc-machine-id\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.896990 master-0 kubenswrapper[31420]: I0220 12:20:00.896886 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-scripts\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.897041 master-0 kubenswrapper[31420]: I0220 12:20:00.896984 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba9661c0-9b94-468a-a207-51d7de5ecc92-operator-scripts\") pod 
\"ironic-b737-account-create-update-wnbqm\" (UID: \"ba9661c0-9b94-468a-a207-51d7de5ecc92\") " pod="openstack/ironic-b737-account-create-update-wnbqm" Feb 20 12:20:00.897092 master-0 kubenswrapper[31420]: I0220 12:20:00.897076 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-config-data\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.898122 master-0 kubenswrapper[31420]: I0220 12:20:00.897425 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85180db-2d00-4ec9-b408-813c4db2d86b-etc-machine-id\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.899076 master-0 kubenswrapper[31420]: I0220 12:20:00.899018 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba9661c0-9b94-468a-a207-51d7de5ecc92-operator-scripts\") pod \"ironic-b737-account-create-update-wnbqm\" (UID: \"ba9661c0-9b94-468a-a207-51d7de5ecc92\") " pod="openstack/ironic-b737-account-create-update-wnbqm" Feb 20 12:20:00.900665 master-0 kubenswrapper[31420]: I0220 12:20:00.900431 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b6f45695-fkj5c"] Feb 20 12:20:00.900797 master-0 kubenswrapper[31420]: I0220 12:20:00.900516 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:00.900997 master-0 kubenswrapper[31420]: I0220 12:20:00.900963 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-combined-ca-bundle\") pod \"neutron-db-sync-v9jsf\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") " pod="openstack/neutron-db-sync-v9jsf" Feb 20 12:20:00.901068 master-0 kubenswrapper[31420]: I0220 12:20:00.900561 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-db-sync-config-data\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.902862 master-0 kubenswrapper[31420]: I0220 12:20:00.902810 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-scripts\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.903624 master-0 kubenswrapper[31420]: I0220 12:20:00.903592 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-config-data\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.914812 master-0 kubenswrapper[31420]: I0220 12:20:00.913882 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-config\") pod \"neutron-db-sync-v9jsf\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") " pod="openstack/neutron-db-sync-v9jsf" Feb 20 12:20:00.914812 
master-0 kubenswrapper[31420]: I0220 12:20:00.914750 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-combined-ca-bundle\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.919285 master-0 kubenswrapper[31420]: I0220 12:20:00.917894 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4hvr\" (UniqueName: \"kubernetes.io/projected/e730e756-3c53-48ff-a27d-5ddbf042a996-kube-api-access-z4hvr\") pod \"neutron-db-sync-v9jsf\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") " pod="openstack/neutron-db-sync-v9jsf" Feb 20 12:20:00.919285 master-0 kubenswrapper[31420]: I0220 12:20:00.917909 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgcf7\" (UniqueName: \"kubernetes.io/projected/ba9661c0-9b94-468a-a207-51d7de5ecc92-kube-api-access-vgcf7\") pod \"ironic-b737-account-create-update-wnbqm\" (UID: \"ba9661c0-9b94-468a-a207-51d7de5ecc92\") " pod="openstack/ironic-b737-account-create-update-wnbqm" Feb 20 12:20:00.920884 master-0 kubenswrapper[31420]: I0220 12:20:00.920793 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h5s5\" (UniqueName: \"kubernetes.io/projected/f85180db-2d00-4ec9-b408-813c4db2d86b-kube-api-access-6h5s5\") pod \"cinder-d44a4-db-sync-wkljp\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:00.926308 master-0 kubenswrapper[31420]: I0220 12:20:00.926213 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-g4frr"] Feb 20 12:20:00.927888 master-0 kubenswrapper[31420]: I0220 12:20:00.927841 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:00.930238 master-0 kubenswrapper[31420]: I0220 12:20:00.930209 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 20 12:20:00.930554 master-0 kubenswrapper[31420]: I0220 12:20:00.930522 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 20 12:20:00.935801 master-0 kubenswrapper[31420]: I0220 12:20:00.935756 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-g4frr"] Feb 20 12:20:00.999986 master-0 kubenswrapper[31420]: I0220 12:20:00.999932 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwz8\" (UniqueName: \"kubernetes.io/projected/cebde23d-32f9-453e-9cea-3c240e0a8e43-kube-api-access-5qwz8\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.000224 master-0 kubenswrapper[31420]: I0220 12:20:01.000177 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sflcg\" (UniqueName: \"kubernetes.io/projected/d644bcbb-d205-4408-a0c7-7e3bbc55e180-kube-api-access-sflcg\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.000279 master-0 kubenswrapper[31420]: I0220 12:20:01.000241 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-sb\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.000647 master-0 kubenswrapper[31420]: I0220 12:20:01.000619 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-svc\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.000724 master-0 kubenswrapper[31420]: I0220 12:20:01.000704 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-config\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.000796 master-0 kubenswrapper[31420]: I0220 12:20:01.000776 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-scripts\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.000835 master-0 kubenswrapper[31420]: I0220 12:20:01.000804 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-config-data\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.000967 master-0 kubenswrapper[31420]: I0220 12:20:01.000944 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d644bcbb-d205-4408-a0c7-7e3bbc55e180-logs\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.001017 master-0 kubenswrapper[31420]: I0220 12:20:01.000972 31420 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-combined-ca-bundle\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.001017 master-0 kubenswrapper[31420]: I0220 12:20:01.000994 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-swift-storage-0\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.001077 master-0 kubenswrapper[31420]: I0220 12:20:01.001048 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-nb\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.102849 master-0 kubenswrapper[31420]: I0220 12:20:01.102774 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-swift-storage-0\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.103055 master-0 kubenswrapper[31420]: I0220 12:20:01.102868 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-nb\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " 
pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.103055 master-0 kubenswrapper[31420]: I0220 12:20:01.102918 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qwz8\" (UniqueName: \"kubernetes.io/projected/cebde23d-32f9-453e-9cea-3c240e0a8e43-kube-api-access-5qwz8\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.103196 master-0 kubenswrapper[31420]: I0220 12:20:01.103152 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sflcg\" (UniqueName: \"kubernetes.io/projected/d644bcbb-d205-4408-a0c7-7e3bbc55e180-kube-api-access-sflcg\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.103245 master-0 kubenswrapper[31420]: I0220 12:20:01.103230 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-sb\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.103411 master-0 kubenswrapper[31420]: I0220 12:20:01.103394 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-svc\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.103489 master-0 kubenswrapper[31420]: I0220 12:20:01.103474 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-config\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: 
\"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.103776 master-0 kubenswrapper[31420]: I0220 12:20:01.103757 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-scripts\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.103823 master-0 kubenswrapper[31420]: I0220 12:20:01.103789 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-config-data\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.103875 master-0 kubenswrapper[31420]: I0220 12:20:01.103859 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d644bcbb-d205-4408-a0c7-7e3bbc55e180-logs\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.103918 master-0 kubenswrapper[31420]: I0220 12:20:01.103887 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-combined-ca-bundle\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.104456 master-0 kubenswrapper[31420]: I0220 12:20:01.104277 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-nb\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " 
pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.104456 master-0 kubenswrapper[31420]: I0220 12:20:01.104322 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-swift-storage-0\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.104456 master-0 kubenswrapper[31420]: I0220 12:20:01.104355 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-sb\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.104456 master-0 kubenswrapper[31420]: I0220 12:20:01.104410 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-svc\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.104627 master-0 kubenswrapper[31420]: I0220 12:20:01.104532 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d644bcbb-d205-4408-a0c7-7e3bbc55e180-logs\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:01.107716 master-0 kubenswrapper[31420]: I0220 12:20:01.105447 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-config\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:01.111740 
master-0 kubenswrapper[31420]: I0220 12:20:01.109649 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-db-sync-wkljp"
Feb 20 12:20:01.111740 master-0 kubenswrapper[31420]: I0220 12:20:01.110634 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-combined-ca-bundle\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr"
Feb 20 12:20:01.111740 master-0 kubenswrapper[31420]: I0220 12:20:01.110849 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-config-data\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr"
Feb 20 12:20:01.111740 master-0 kubenswrapper[31420]: I0220 12:20:01.110897 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-scripts\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr"
Feb 20 12:20:01.122278 master-0 kubenswrapper[31420]: I0220 12:20:01.121772 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qwz8\" (UniqueName: \"kubernetes.io/projected/cebde23d-32f9-453e-9cea-3c240e0a8e43-kube-api-access-5qwz8\") pod \"dnsmasq-dns-75b6f45695-fkj5c\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " pod="openstack/dnsmasq-dns-75b6f45695-fkj5c"
Feb 20 12:20:01.128595 master-0 kubenswrapper[31420]: I0220 12:20:01.125892 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sflcg\" (UniqueName: \"kubernetes.io/projected/d644bcbb-d205-4408-a0c7-7e3bbc55e180-kube-api-access-sflcg\") pod \"placement-db-sync-g4frr\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " pod="openstack/placement-db-sync-g4frr"
Feb 20 12:20:01.200678 master-0 kubenswrapper[31420]: I0220 12:20:01.200181 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b737-account-create-update-wnbqm"
Feb 20 12:20:01.212031 master-0 kubenswrapper[31420]: I0220 12:20:01.211573 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-v9jsf"
Feb 20 12:20:01.260102 master-0 kubenswrapper[31420]: I0220 12:20:01.256715 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c"
Feb 20 12:20:01.292334 master-0 kubenswrapper[31420]: I0220 12:20:01.291578 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-g4frr"
Feb 20 12:20:01.578459 master-0 kubenswrapper[31420]: I0220 12:20:01.578325 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866f884f75-w68xj"]
Feb 20 12:20:01.603155 master-0 kubenswrapper[31420]: I0220 12:20:01.602481 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7cvhp"]
Feb 20 12:20:01.617117 master-0 kubenswrapper[31420]: W0220 12:20:01.616901 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86d64296_9441_4c59_ac5a_0a6d94cf9499.slice/crio-0a4f3e2836984859108a1be089d0cfe8870375998a32a49372589998bb041586 WatchSource:0}: Error finding container 0a4f3e2836984859108a1be089d0cfe8870375998a32a49372589998bb041586: Status 404 returned error can't find the container with id 0a4f3e2836984859108a1be089d0cfe8870375998a32a49372589998bb041586
Feb 20 12:20:01.619024 master-0 kubenswrapper[31420]: W0220 12:20:01.618885 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dba19cd_d9f4_419e_acd9_5a240202d577.slice/crio-1dd5f3db3ac60b4fdc2c3ae699cb7e767d61e169bcb35564ada50f0a561b00a0 WatchSource:0}: Error finding container 1dd5f3db3ac60b4fdc2c3ae699cb7e767d61e169bcb35564ada50f0a561b00a0: Status 404 returned error can't find the container with id 1dd5f3db3ac60b4fdc2c3ae699cb7e767d61e169bcb35564ada50f0a561b00a0
Feb 20 12:20:01.625808 master-0 kubenswrapper[31420]: I0220 12:20:01.625774 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-4q2vm"]
Feb 20 12:20:01.658043 master-0 kubenswrapper[31420]: W0220 12:20:01.657721 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85180db_2d00_4ec9_b408_813c4db2d86b.slice/crio-6d238b393f7c900a53df45f0eaa5cfd2a7d934f9929ad4f3cf7ea5d3bf94a8e8 WatchSource:0}: Error finding container 6d238b393f7c900a53df45f0eaa5cfd2a7d934f9929ad4f3cf7ea5d3bf94a8e8: Status 404 returned error can't find the container with id 6d238b393f7c900a53df45f0eaa5cfd2a7d934f9929ad4f3cf7ea5d3bf94a8e8
Feb 20 12:20:01.666327 master-0 kubenswrapper[31420]: I0220 12:20:01.666268 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-db-sync-wkljp"]
Feb 20 12:20:01.895765 master-0 kubenswrapper[31420]: I0220 12:20:01.895718 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-b737-account-create-update-wnbqm"]
Feb 20 12:20:02.049780 master-0 kubenswrapper[31420]: I0220 12:20:02.049453 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-4q2vm" event={"ID":"33f57898-ebe9-4c8b-984b-86a4b35fed36","Type":"ContainerStarted","Data":"6cfa026e3bf259a7d671a1b5788cae2f9c7cd73916a2fd08aebc17f25e3c4856"}
Feb 20 12:20:02.049780 master-0 kubenswrapper[31420]: I0220 12:20:02.049543 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-4q2vm" event={"ID":"33f57898-ebe9-4c8b-984b-86a4b35fed36","Type":"ContainerStarted","Data":"3dd22e756cdf4660af0fc1505c51569a61b389bd5e17c13fd2a373a1a119ab76"}
Feb 20 12:20:02.061261 master-0 kubenswrapper[31420]: I0220 12:20:02.060590 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-db-sync-wkljp" event={"ID":"f85180db-2d00-4ec9-b408-813c4db2d86b","Type":"ContainerStarted","Data":"6d238b393f7c900a53df45f0eaa5cfd2a7d934f9929ad4f3cf7ea5d3bf94a8e8"}
Feb 20 12:20:02.068124 master-0 kubenswrapper[31420]: I0220 12:20:02.063346 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866f884f75-w68xj" event={"ID":"4dba19cd-d9f4-419e-acd9-5a240202d577","Type":"ContainerStarted","Data":"0de51f7f49a31f2c9d89354e893509b56ea81a15e920a73326d577be8e61c29f"}
Feb 20 12:20:02.068124 master-0 kubenswrapper[31420]: I0220 12:20:02.063382 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866f884f75-w68xj" event={"ID":"4dba19cd-d9f4-419e-acd9-5a240202d577","Type":"ContainerStarted","Data":"1dd5f3db3ac60b4fdc2c3ae699cb7e767d61e169bcb35564ada50f0a561b00a0"}
Feb 20 12:20:02.071737 master-0 kubenswrapper[31420]: I0220 12:20:02.071696 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7cvhp" event={"ID":"86d64296-9441-4c59-ac5a-0a6d94cf9499","Type":"ContainerStarted","Data":"0a4f3e2836984859108a1be089d0cfe8870375998a32a49372589998bb041586"}
Feb 20 12:20:02.072734 master-0 kubenswrapper[31420]: I0220 12:20:02.072714 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b737-account-create-update-wnbqm" event={"ID":"ba9661c0-9b94-468a-a207-51d7de5ecc92","Type":"ContainerStarted","Data":"8c51da38572d9a35b85e2a1c7ecdff3ebb5194dcf3f6959de8ec40a30fbcc4d5"}
Feb 20 12:20:02.125926 master-0 kubenswrapper[31420]: I0220 12:20:02.125867 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-v9jsf"]
Feb 20 12:20:02.718558 master-0 kubenswrapper[31420]: I0220 12:20:02.717716 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e60fa-default-external-api-0"]
Feb 20 12:20:02.721962 master-0 kubenswrapper[31420]: I0220 12:20:02.721896 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.725774 master-0 kubenswrapper[31420]: I0220 12:20:02.725745 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 20 12:20:02.725905 master-0 kubenswrapper[31420]: I0220 12:20:02.725882 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 20 12:20:02.727097 master-0 kubenswrapper[31420]: I0220 12:20:02.727077 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e60fa-default-external-config-data"
Feb 20 12:20:02.734623 master-0 kubenswrapper[31420]: I0220 12:20:02.734014 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7cvhp" podStartSLOduration=2.733997938 podStartE2EDuration="2.733997938s" podCreationTimestamp="2026-02-20 12:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:02.708725774 +0000 UTC m=+907.427964025" watchObservedRunningTime="2026-02-20 12:20:02.733997938 +0000 UTC m=+907.453236179"
Feb 20 12:20:02.734623 master-0 kubenswrapper[31420]: I0220 12:20:02.734137 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b6f45695-fkj5c"]
Feb 20 12:20:02.751253 master-0 kubenswrapper[31420]: W0220 12:20:02.751202 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcebde23d_32f9_453e_9cea_3c240e0a8e43.slice/crio-5f987a254c3b84a1b2d3682466c8f2a5b2cbf026263d7aeae239b6d75f9ae4f7 WatchSource:0}: Error finding container 5f987a254c3b84a1b2d3682466c8f2a5b2cbf026263d7aeae239b6d75f9ae4f7: Status 404 returned error can't find the container with id 5f987a254c3b84a1b2d3682466c8f2a5b2cbf026263d7aeae239b6d75f9ae4f7
Feb 20 12:20:02.756828 master-0 kubenswrapper[31420]: I0220 12:20:02.756796 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-g4frr"]
Feb 20 12:20:02.781547 master-0 kubenswrapper[31420]: I0220 12:20:02.779154 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"]
Feb 20 12:20:02.843079 master-0 kubenswrapper[31420]: I0220 12:20:02.841425 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.843079 master-0 kubenswrapper[31420]: I0220 12:20:02.841513 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-public-tls-certs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.843079 master-0 kubenswrapper[31420]: I0220 12:20:02.842833 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-httpd-run\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.843079 master-0 kubenswrapper[31420]: I0220 12:20:02.842891 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsh9v\" (UniqueName: \"kubernetes.io/projected/47a5fec2-090c-4795-800d-7c55ce83f16d-kube-api-access-fsh9v\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.843079 master-0 kubenswrapper[31420]: I0220 12:20:02.842944 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-logs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.843079 master-0 kubenswrapper[31420]: I0220 12:20:02.843012 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-config-data\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.843079 master-0 kubenswrapper[31420]: I0220 12:20:02.843081 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-combined-ca-bundle\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.843461 master-0 kubenswrapper[31420]: I0220 12:20:02.843142 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-scripts\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.945611 master-0 kubenswrapper[31420]: I0220 12:20:02.945062 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-httpd-run\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.945611 master-0 kubenswrapper[31420]: I0220 12:20:02.945160 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsh9v\" (UniqueName: \"kubernetes.io/projected/47a5fec2-090c-4795-800d-7c55ce83f16d-kube-api-access-fsh9v\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.945611 master-0 kubenswrapper[31420]: I0220 12:20:02.945195 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-logs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.945611 master-0 kubenswrapper[31420]: I0220 12:20:02.945224 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-config-data\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.945611 master-0 kubenswrapper[31420]: I0220 12:20:02.945259 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-combined-ca-bundle\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.945611 master-0 kubenswrapper[31420]: I0220 12:20:02.945295 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-scripts\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.945611 master-0 kubenswrapper[31420]: I0220 12:20:02.945374 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.945611 master-0 kubenswrapper[31420]: I0220 12:20:02.945406 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-public-tls-certs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.947270 master-0 kubenswrapper[31420]: I0220 12:20:02.946876 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-logs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.947270 master-0 kubenswrapper[31420]: I0220 12:20:02.946929 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-httpd-run\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.953741 master-0 kubenswrapper[31420]: I0220 12:20:02.951884 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-scripts\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.953741 master-0 kubenswrapper[31420]: I0220 12:20:02.952025 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-combined-ca-bundle\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.953741 master-0 kubenswrapper[31420]: I0220 12:20:02.952463 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 12:20:02.953741 master-0 kubenswrapper[31420]: I0220 12:20:02.952487 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/368f719461f0c265cdc1ceb7166e6bad74c18134a381ef3f2ecc6c3c88bbea1f/globalmount\"" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.954999 master-0 kubenswrapper[31420]: I0220 12:20:02.954964 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-config-data\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.955718 master-0 kubenswrapper[31420]: I0220 12:20:02.955687 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-public-tls-certs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:02.969467 master-0 kubenswrapper[31420]: I0220 12:20:02.969415 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsh9v\" (UniqueName: \"kubernetes.io/projected/47a5fec2-090c-4795-800d-7c55ce83f16d-kube-api-access-fsh9v\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:03.092023 master-0 kubenswrapper[31420]: I0220 12:20:03.091952 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-4q2vm" event={"ID":"33f57898-ebe9-4c8b-984b-86a4b35fed36","Type":"ContainerDied","Data":"6cfa026e3bf259a7d671a1b5788cae2f9c7cd73916a2fd08aebc17f25e3c4856"}
Feb 20 12:20:03.092319 master-0 kubenswrapper[31420]: I0220 12:20:03.091772 31420 generic.go:334] "Generic (PLEG): container finished" podID="33f57898-ebe9-4c8b-984b-86a4b35fed36" containerID="6cfa026e3bf259a7d671a1b5788cae2f9c7cd73916a2fd08aebc17f25e3c4856" exitCode=0
Feb 20 12:20:03.098081 master-0 kubenswrapper[31420]: I0220 12:20:03.097023 31420 generic.go:334] "Generic (PLEG): container finished" podID="4dba19cd-d9f4-419e-acd9-5a240202d577" containerID="0de51f7f49a31f2c9d89354e893509b56ea81a15e920a73326d577be8e61c29f" exitCode=0
Feb 20 12:20:03.098081 master-0 kubenswrapper[31420]: I0220 12:20:03.097120 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866f884f75-w68xj" event={"ID":"4dba19cd-d9f4-419e-acd9-5a240202d577","Type":"ContainerDied","Data":"0de51f7f49a31f2c9d89354e893509b56ea81a15e920a73326d577be8e61c29f"}
Feb 20 12:20:03.098081 master-0 kubenswrapper[31420]: I0220 12:20:03.097149 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866f884f75-w68xj" event={"ID":"4dba19cd-d9f4-419e-acd9-5a240202d577","Type":"ContainerDied","Data":"1dd5f3db3ac60b4fdc2c3ae699cb7e767d61e169bcb35564ada50f0a561b00a0"}
Feb 20 12:20:03.098081 master-0 kubenswrapper[31420]: I0220 12:20:03.097159 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd5f3db3ac60b4fdc2c3ae699cb7e767d61e169bcb35564ada50f0a561b00a0"
Feb 20 12:20:03.101843 master-0 kubenswrapper[31420]: I0220 12:20:03.101780 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7cvhp" event={"ID":"86d64296-9441-4c59-ac5a-0a6d94cf9499","Type":"ContainerStarted","Data":"d4c4b10edf183fb34c17d57ce1327c4a724f81082f7e37ecdd3583f5864e45a3"}
Feb 20 12:20:03.106864 master-0 kubenswrapper[31420]: I0220 12:20:03.104459 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g4frr" event={"ID":"d644bcbb-d205-4408-a0c7-7e3bbc55e180","Type":"ContainerStarted","Data":"1e6ecb6b67541a60da48b5db4cc36166da3aa144b93396cc10b713dc030c0211"}
Feb 20 12:20:03.109514 master-0 kubenswrapper[31420]: I0220 12:20:03.107296 31420 generic.go:334] "Generic (PLEG): container finished" podID="ba9661c0-9b94-468a-a207-51d7de5ecc92" containerID="672c4272aec9b7e8ded032efdcd0ebd3b995afe98c3005fde71f6773d902c8c4" exitCode=0
Feb 20 12:20:03.109514 master-0 kubenswrapper[31420]: I0220 12:20:03.107359 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b737-account-create-update-wnbqm" event={"ID":"ba9661c0-9b94-468a-a207-51d7de5ecc92","Type":"ContainerDied","Data":"672c4272aec9b7e8ded032efdcd0ebd3b995afe98c3005fde71f6773d902c8c4"}
Feb 20 12:20:03.135212 master-0 kubenswrapper[31420]: I0220 12:20:03.135149 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" event={"ID":"cebde23d-32f9-453e-9cea-3c240e0a8e43","Type":"ContainerStarted","Data":"5f987a254c3b84a1b2d3682466c8f2a5b2cbf026263d7aeae239b6d75f9ae4f7"}
Feb 20 12:20:03.139032 master-0 kubenswrapper[31420]: I0220 12:20:03.138978 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v9jsf" event={"ID":"e730e756-3c53-48ff-a27d-5ddbf042a996","Type":"ContainerStarted","Data":"af7bcd0389da6ecbf387e067862097cbdde12c6359b8812eb3086092ba104b4a"}
Feb 20 12:20:03.139032 master-0 kubenswrapper[31420]: I0220 12:20:03.139032 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v9jsf" event={"ID":"e730e756-3c53-48ff-a27d-5ddbf042a996","Type":"ContainerStarted","Data":"7681679c5d482adba1ea08795c751d392ab5e721368ff0cf5889cca74ac8e48e"}
Feb 20 12:20:03.199263 master-0 kubenswrapper[31420]: I0220 12:20:03.199193 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:03.205907 master-0 kubenswrapper[31420]: I0220 12:20:03.204709 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-v9jsf" podStartSLOduration=3.204682872 podStartE2EDuration="3.204682872s" podCreationTimestamp="2026-02-20 12:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:03.161325767 +0000 UTC m=+907.880564008" watchObservedRunningTime="2026-02-20 12:20:03.204682872 +0000 UTC m=+907.923921113"
Feb 20 12:20:03.353635 master-0 kubenswrapper[31420]: I0220 12:20:03.353485 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stlld\" (UniqueName: \"kubernetes.io/projected/4dba19cd-d9f4-419e-acd9-5a240202d577-kube-api-access-stlld\") pod \"4dba19cd-d9f4-419e-acd9-5a240202d577\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") "
Feb 20 12:20:03.353848 master-0 kubenswrapper[31420]: I0220 12:20:03.353691 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-swift-storage-0\") pod \"4dba19cd-d9f4-419e-acd9-5a240202d577\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") "
Feb 20 12:20:03.353996 master-0 kubenswrapper[31420]: I0220 12:20:03.353974 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-config\") pod \"4dba19cd-d9f4-419e-acd9-5a240202d577\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") "
Feb 20 12:20:03.354041 master-0 kubenswrapper[31420]: I0220 12:20:03.354009 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-sb\") pod \"4dba19cd-d9f4-419e-acd9-5a240202d577\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") "
Feb 20 12:20:03.354100 master-0 kubenswrapper[31420]: I0220 12:20:03.354079 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-svc\") pod \"4dba19cd-d9f4-419e-acd9-5a240202d577\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") "
Feb 20 12:20:03.354137 master-0 kubenswrapper[31420]: I0220 12:20:03.354124 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-nb\") pod \"4dba19cd-d9f4-419e-acd9-5a240202d577\" (UID: \"4dba19cd-d9f4-419e-acd9-5a240202d577\") "
Feb 20 12:20:03.357615 master-0 kubenswrapper[31420]: I0220 12:20:03.357558 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dba19cd-d9f4-419e-acd9-5a240202d577-kube-api-access-stlld" (OuterVolumeSpecName: "kube-api-access-stlld") pod "4dba19cd-d9f4-419e-acd9-5a240202d577" (UID: "4dba19cd-d9f4-419e-acd9-5a240202d577"). InnerVolumeSpecName "kube-api-access-stlld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:20:03.378567 master-0 kubenswrapper[31420]: I0220 12:20:03.378471 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-config" (OuterVolumeSpecName: "config") pod "4dba19cd-d9f4-419e-acd9-5a240202d577" (UID: "4dba19cd-d9f4-419e-acd9-5a240202d577"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:03.380187 master-0 kubenswrapper[31420]: I0220 12:20:03.380122 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4dba19cd-d9f4-419e-acd9-5a240202d577" (UID: "4dba19cd-d9f4-419e-acd9-5a240202d577"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:03.381352 master-0 kubenswrapper[31420]: I0220 12:20:03.381270 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4dba19cd-d9f4-419e-acd9-5a240202d577" (UID: "4dba19cd-d9f4-419e-acd9-5a240202d577"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:03.390554 master-0 kubenswrapper[31420]: I0220 12:20:03.390293 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dba19cd-d9f4-419e-acd9-5a240202d577" (UID: "4dba19cd-d9f4-419e-acd9-5a240202d577"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:03.444899 master-0 kubenswrapper[31420]: I0220 12:20:03.444748 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4dba19cd-d9f4-419e-acd9-5a240202d577" (UID: "4dba19cd-d9f4-419e-acd9-5a240202d577"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:03.457163 master-0 kubenswrapper[31420]: I0220 12:20:03.457110 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:03.457163 master-0 kubenswrapper[31420]: I0220 12:20:03.457163 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:03.457405 master-0 kubenswrapper[31420]: I0220 12:20:03.457177 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:03.457405 master-0 kubenswrapper[31420]: I0220 12:20:03.457186 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:03.457405 master-0 kubenswrapper[31420]: I0220 12:20:03.457196 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stlld\" (UniqueName: \"kubernetes.io/projected/4dba19cd-d9f4-419e-acd9-5a240202d577-kube-api-access-stlld\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:03.457405 master-0 kubenswrapper[31420]: I0220 12:20:03.457205 31420 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dba19cd-d9f4-419e-acd9-5a240202d577-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:03.584423 master-0 kubenswrapper[31420]: I0220 12:20:03.582237 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"]
Feb 20 12:20:03.585468 master-0 kubenswrapper[31420]: E0220 12:20:03.585428 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-e60fa-default-external-api-0" podUID="47a5fec2-090c-4795-800d-7c55ce83f16d"
Feb 20 12:20:04.157213 master-0 kubenswrapper[31420]: I0220 12:20:04.156319 31420 generic.go:334] "Generic (PLEG): container finished" podID="cebde23d-32f9-453e-9cea-3c240e0a8e43" containerID="06de34299e9fbb5f40cd4903ce58ddde79dd310774d2b62b0aa9687d7d997a1f" exitCode=0
Feb 20 12:20:04.157213 master-0 kubenswrapper[31420]: I0220 12:20:04.156547 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" event={"ID":"cebde23d-32f9-453e-9cea-3c240e0a8e43","Type":"ContainerDied","Data":"06de34299e9fbb5f40cd4903ce58ddde79dd310774d2b62b0aa9687d7d997a1f"}
Feb 20 12:20:04.157213 master-0 kubenswrapper[31420]: I0220 12:20:04.156629 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:04.157213 master-0 kubenswrapper[31420]: I0220 12:20:04.156812 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866f884f75-w68xj"
Feb 20 12:20:04.399828 master-0 kubenswrapper[31420]: I0220 12:20:04.399604 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:04.439200 master-0 kubenswrapper[31420]: I0220 12:20:04.439163 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:04.515026 master-0 kubenswrapper[31420]: I0220 12:20:04.513914 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-combined-ca-bundle\") pod \"47a5fec2-090c-4795-800d-7c55ce83f16d\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") "
Feb 20 12:20:04.515026 master-0 kubenswrapper[31420]: I0220 12:20:04.513978 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-logs\") pod \"47a5fec2-090c-4795-800d-7c55ce83f16d\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") "
Feb 20 12:20:04.515026 master-0 kubenswrapper[31420]: I0220 12:20:04.514005 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsh9v\" (UniqueName: \"kubernetes.io/projected/47a5fec2-090c-4795-800d-7c55ce83f16d-kube-api-access-fsh9v\") pod \"47a5fec2-090c-4795-800d-7c55ce83f16d\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") "
Feb 20 12:20:04.515026 master-0 kubenswrapper[31420]: I0220 12:20:04.514044 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-httpd-run\") pod \"47a5fec2-090c-4795-800d-7c55ce83f16d\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") "
Feb 20 12:20:04.515026 master-0 kubenswrapper[31420]: I0220 12:20:04.514092 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-scripts\") pod \"47a5fec2-090c-4795-800d-7c55ce83f16d\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") "
Feb 20 12:20:04.515026 master-0 kubenswrapper[31420]: I0220 12:20:04.514227 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-public-tls-certs\") pod \"47a5fec2-090c-4795-800d-7c55ce83f16d\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") "
Feb 20 12:20:04.515026 master-0 kubenswrapper[31420]: I0220 12:20:04.514246 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-config-data\") pod \"47a5fec2-090c-4795-800d-7c55ce83f16d\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") "
Feb 20 12:20:04.515026 master-0 kubenswrapper[31420]: I0220 12:20:04.514608 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "47a5fec2-090c-4795-800d-7c55ce83f16d" (UID: "47a5fec2-090c-4795-800d-7c55ce83f16d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:20:04.515026 master-0 kubenswrapper[31420]: I0220 12:20:04.514792 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-logs" (OuterVolumeSpecName: "logs") pod "47a5fec2-090c-4795-800d-7c55ce83f16d" (UID: "47a5fec2-090c-4795-800d-7c55ce83f16d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:20:04.517331 master-0 kubenswrapper[31420]: I0220 12:20:04.517220 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866f884f75-w68xj"]
Feb 20 12:20:04.517851 master-0 kubenswrapper[31420]: I0220 12:20:04.517788 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47a5fec2-090c-4795-800d-7c55ce83f16d" (UID: "47a5fec2-090c-4795-800d-7c55ce83f16d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:04.525933 master-0 kubenswrapper[31420]: I0220 12:20:04.519908 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a5fec2-090c-4795-800d-7c55ce83f16d-kube-api-access-fsh9v" (OuterVolumeSpecName: "kube-api-access-fsh9v") pod "47a5fec2-090c-4795-800d-7c55ce83f16d" (UID: "47a5fec2-090c-4795-800d-7c55ce83f16d"). InnerVolumeSpecName "kube-api-access-fsh9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:20:04.525933 master-0 kubenswrapper[31420]: I0220 12:20:04.522201 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "47a5fec2-090c-4795-800d-7c55ce83f16d" (UID: "47a5fec2-090c-4795-800d-7c55ce83f16d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:04.527021 master-0 kubenswrapper[31420]: I0220 12:20:04.526920 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-config-data" (OuterVolumeSpecName: "config-data") pod "47a5fec2-090c-4795-800d-7c55ce83f16d" (UID: "47a5fec2-090c-4795-800d-7c55ce83f16d").
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:04.532987 master-0 kubenswrapper[31420]: I0220 12:20:04.532831 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-866f884f75-w68xj"] Feb 20 12:20:04.532987 master-0 kubenswrapper[31420]: I0220 12:20:04.532920 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-scripts" (OuterVolumeSpecName: "scripts") pod "47a5fec2-090c-4795-800d-7c55ce83f16d" (UID: "47a5fec2-090c-4795-800d-7c55ce83f16d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:04.617431 master-0 kubenswrapper[31420]: I0220 12:20:04.616016 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"47a5fec2-090c-4795-800d-7c55ce83f16d\" (UID: \"47a5fec2-090c-4795-800d-7c55ce83f16d\") " Feb 20 12:20:04.617431 master-0 kubenswrapper[31420]: I0220 12:20:04.616672 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:04.617431 master-0 kubenswrapper[31420]: I0220 12:20:04.616686 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:04.617431 master-0 kubenswrapper[31420]: I0220 12:20:04.616698 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsh9v\" (UniqueName: \"kubernetes.io/projected/47a5fec2-090c-4795-800d-7c55ce83f16d-kube-api-access-fsh9v\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:04.617431 master-0 kubenswrapper[31420]: I0220 12:20:04.616710 31420 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47a5fec2-090c-4795-800d-7c55ce83f16d-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:04.617431 master-0 kubenswrapper[31420]: I0220 12:20:04.616717 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:04.617431 master-0 kubenswrapper[31420]: I0220 12:20:04.616725 31420 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:04.617431 master-0 kubenswrapper[31420]: I0220 12:20:04.616733 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a5fec2-090c-4795-800d-7c55ce83f16d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:04.644903 master-0 kubenswrapper[31420]: I0220 12:20:04.640452 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a" (OuterVolumeSpecName: "glance") pod "47a5fec2-090c-4795-800d-7c55ce83f16d" (UID: "47a5fec2-090c-4795-800d-7c55ce83f16d"). InnerVolumeSpecName "pvc-935c0235-ccde-432b-b98e-92716df30f4a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 20 12:20:04.720346 master-0 kubenswrapper[31420]: I0220 12:20:04.720209 31420 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") on node \"master-0\" " Feb 20 12:20:04.759473 master-0 kubenswrapper[31420]: I0220 12:20:04.758606 31420 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping UnmountDevice... Feb 20 12:20:04.759473 master-0 kubenswrapper[31420]: I0220 12:20:04.758795 31420 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-935c0235-ccde-432b-b98e-92716df30f4a" (UniqueName: "kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a") on node "master-0" Feb 20 12:20:04.823115 master-0 kubenswrapper[31420]: I0220 12:20:04.822303 31420 reconciler_common.go:293] "Volume detached for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:04.918627 master-0 kubenswrapper[31420]: I0220 12:20:04.918227 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b737-account-create-update-wnbqm" Feb 20 12:20:04.928284 master-0 kubenswrapper[31420]: I0220 12:20:04.928236 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-4q2vm" Feb 20 12:20:05.031394 master-0 kubenswrapper[31420]: I0220 12:20:05.031102 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgcf7\" (UniqueName: \"kubernetes.io/projected/ba9661c0-9b94-468a-a207-51d7de5ecc92-kube-api-access-vgcf7\") pod \"ba9661c0-9b94-468a-a207-51d7de5ecc92\" (UID: \"ba9661c0-9b94-468a-a207-51d7de5ecc92\") " Feb 20 12:20:05.031394 master-0 kubenswrapper[31420]: I0220 12:20:05.031364 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f57898-ebe9-4c8b-984b-86a4b35fed36-operator-scripts\") pod \"33f57898-ebe9-4c8b-984b-86a4b35fed36\" (UID: \"33f57898-ebe9-4c8b-984b-86a4b35fed36\") " Feb 20 12:20:05.031850 master-0 kubenswrapper[31420]: I0220 12:20:05.031808 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/33f57898-ebe9-4c8b-984b-86a4b35fed36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33f57898-ebe9-4c8b-984b-86a4b35fed36" (UID: "33f57898-ebe9-4c8b-984b-86a4b35fed36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:05.031978 master-0 kubenswrapper[31420]: I0220 12:20:05.031948 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vh54\" (UniqueName: \"kubernetes.io/projected/33f57898-ebe9-4c8b-984b-86a4b35fed36-kube-api-access-2vh54\") pod \"33f57898-ebe9-4c8b-984b-86a4b35fed36\" (UID: \"33f57898-ebe9-4c8b-984b-86a4b35fed36\") " Feb 20 12:20:05.032292 master-0 kubenswrapper[31420]: I0220 12:20:05.032272 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba9661c0-9b94-468a-a207-51d7de5ecc92-operator-scripts\") pod \"ba9661c0-9b94-468a-a207-51d7de5ecc92\" (UID: \"ba9661c0-9b94-468a-a207-51d7de5ecc92\") " Feb 20 12:20:05.033209 master-0 kubenswrapper[31420]: I0220 12:20:05.033115 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33f57898-ebe9-4c8b-984b-86a4b35fed36-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:05.033661 master-0 kubenswrapper[31420]: I0220 12:20:05.033518 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba9661c0-9b94-468a-a207-51d7de5ecc92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba9661c0-9b94-468a-a207-51d7de5ecc92" (UID: "ba9661c0-9b94-468a-a207-51d7de5ecc92"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:05.045060 master-0 kubenswrapper[31420]: I0220 12:20:05.045015 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba9661c0-9b94-468a-a207-51d7de5ecc92-kube-api-access-vgcf7" (OuterVolumeSpecName: "kube-api-access-vgcf7") pod "ba9661c0-9b94-468a-a207-51d7de5ecc92" (UID: "ba9661c0-9b94-468a-a207-51d7de5ecc92"). InnerVolumeSpecName "kube-api-access-vgcf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:05.045371 master-0 kubenswrapper[31420]: I0220 12:20:05.045341 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f57898-ebe9-4c8b-984b-86a4b35fed36-kube-api-access-2vh54" (OuterVolumeSpecName: "kube-api-access-2vh54") pod "33f57898-ebe9-4c8b-984b-86a4b35fed36" (UID: "33f57898-ebe9-4c8b-984b-86a4b35fed36"). InnerVolumeSpecName "kube-api-access-2vh54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:05.136002 master-0 kubenswrapper[31420]: I0220 12:20:05.135384 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgcf7\" (UniqueName: \"kubernetes.io/projected/ba9661c0-9b94-468a-a207-51d7de5ecc92-kube-api-access-vgcf7\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:05.136002 master-0 kubenswrapper[31420]: I0220 12:20:05.135444 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vh54\" (UniqueName: \"kubernetes.io/projected/33f57898-ebe9-4c8b-984b-86a4b35fed36-kube-api-access-2vh54\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:05.136002 master-0 kubenswrapper[31420]: I0220 12:20:05.135462 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba9661c0-9b94-468a-a207-51d7de5ecc92-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:05.174683 master-0 kubenswrapper[31420]: I0220 12:20:05.174613 31420 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-4q2vm" event={"ID":"33f57898-ebe9-4c8b-984b-86a4b35fed36","Type":"ContainerDied","Data":"3dd22e756cdf4660af0fc1505c51569a61b389bd5e17c13fd2a373a1a119ab76"} Feb 20 12:20:05.174683 master-0 kubenswrapper[31420]: I0220 12:20:05.174665 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dd22e756cdf4660af0fc1505c51569a61b389bd5e17c13fd2a373a1a119ab76" Feb 20 12:20:05.175401 master-0 kubenswrapper[31420]: I0220 12:20:05.174722 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-4q2vm" Feb 20 12:20:05.178655 master-0 kubenswrapper[31420]: I0220 12:20:05.178604 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b737-account-create-update-wnbqm" event={"ID":"ba9661c0-9b94-468a-a207-51d7de5ecc92","Type":"ContainerDied","Data":"8c51da38572d9a35b85e2a1c7ecdff3ebb5194dcf3f6959de8ec40a30fbcc4d5"} Feb 20 12:20:05.178779 master-0 kubenswrapper[31420]: I0220 12:20:05.178665 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c51da38572d9a35b85e2a1c7ecdff3ebb5194dcf3f6959de8ec40a30fbcc4d5" Feb 20 12:20:05.178779 master-0 kubenswrapper[31420]: I0220 12:20:05.178741 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b737-account-create-update-wnbqm" Feb 20 12:20:05.182206 master-0 kubenswrapper[31420]: I0220 12:20:05.182175 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.182301 master-0 kubenswrapper[31420]: I0220 12:20:05.182245 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" event={"ID":"cebde23d-32f9-453e-9cea-3c240e0a8e43","Type":"ContainerStarted","Data":"844fc409bd4fd08593b300e7fcacbcdecf3ba5a3e68c9f7a90c2f9f597024073"} Feb 20 12:20:05.182882 master-0 kubenswrapper[31420]: I0220 12:20:05.182804 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:05.228202 master-0 kubenswrapper[31420]: I0220 12:20:05.228123 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" podStartSLOduration=5.228104265 podStartE2EDuration="5.228104265s" podCreationTimestamp="2026-02-20 12:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:05.220734697 +0000 UTC m=+909.939972958" watchObservedRunningTime="2026-02-20 12:20:05.228104265 +0000 UTC m=+909.947342506" Feb 20 12:20:05.276851 master-0 kubenswrapper[31420]: I0220 12:20:05.276445 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"] Feb 20 12:20:05.311352 master-0 kubenswrapper[31420]: I0220 12:20:05.311156 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"] Feb 20 12:20:05.334642 master-0 kubenswrapper[31420]: I0220 12:20:05.332409 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e60fa-default-external-api-0"] Feb 20 12:20:05.334642 master-0 kubenswrapper[31420]: E0220 12:20:05.333072 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba9661c0-9b94-468a-a207-51d7de5ecc92" containerName="mariadb-account-create-update" Feb 20 12:20:05.334642 master-0 
kubenswrapper[31420]: I0220 12:20:05.333089 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9661c0-9b94-468a-a207-51d7de5ecc92" containerName="mariadb-account-create-update" Feb 20 12:20:05.334642 master-0 kubenswrapper[31420]: E0220 12:20:05.333141 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dba19cd-d9f4-419e-acd9-5a240202d577" containerName="init" Feb 20 12:20:05.334642 master-0 kubenswrapper[31420]: I0220 12:20:05.333149 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dba19cd-d9f4-419e-acd9-5a240202d577" containerName="init" Feb 20 12:20:05.334642 master-0 kubenswrapper[31420]: E0220 12:20:05.333174 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f57898-ebe9-4c8b-984b-86a4b35fed36" containerName="mariadb-database-create" Feb 20 12:20:05.334642 master-0 kubenswrapper[31420]: I0220 12:20:05.333180 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f57898-ebe9-4c8b-984b-86a4b35fed36" containerName="mariadb-database-create" Feb 20 12:20:05.334642 master-0 kubenswrapper[31420]: I0220 12:20:05.333480 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dba19cd-d9f4-419e-acd9-5a240202d577" containerName="init" Feb 20 12:20:05.334642 master-0 kubenswrapper[31420]: I0220 12:20:05.333501 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f57898-ebe9-4c8b-984b-86a4b35fed36" containerName="mariadb-database-create" Feb 20 12:20:05.334642 master-0 kubenswrapper[31420]: I0220 12:20:05.333544 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba9661c0-9b94-468a-a207-51d7de5ecc92" containerName="mariadb-account-create-update" Feb 20 12:20:05.335227 master-0 kubenswrapper[31420]: I0220 12:20:05.334671 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.342761 master-0 kubenswrapper[31420]: I0220 12:20:05.342704 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 20 12:20:05.343165 master-0 kubenswrapper[31420]: I0220 12:20:05.342925 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e60fa-default-external-config-data" Feb 20 12:20:05.346607 master-0 kubenswrapper[31420]: I0220 12:20:05.345956 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 20 12:20:05.354611 master-0 kubenswrapper[31420]: I0220 12:20:05.353281 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"] Feb 20 12:20:05.443951 master-0 kubenswrapper[31420]: I0220 12:20:05.443869 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9drk4\" (UniqueName: \"kubernetes.io/projected/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-kube-api-access-9drk4\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.444322 master-0 kubenswrapper[31420]: I0220 12:20:05.444006 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-public-tls-certs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.444322 master-0 kubenswrapper[31420]: I0220 12:20:05.444118 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-scripts\") pod 
\"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.444322 master-0 kubenswrapper[31420]: I0220 12:20:05.444162 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-logs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.444322 master-0 kubenswrapper[31420]: I0220 12:20:05.444199 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.444322 master-0 kubenswrapper[31420]: I0220 12:20:05.444290 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-config-data\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.444558 master-0 kubenswrapper[31420]: I0220 12:20:05.444386 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-combined-ca-bundle\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.444558 master-0 kubenswrapper[31420]: I0220 12:20:05.444408 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-httpd-run\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.513142 master-0 kubenswrapper[31420]: I0220 12:20:05.513098 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a5fec2-090c-4795-800d-7c55ce83f16d" path="/var/lib/kubelet/pods/47a5fec2-090c-4795-800d-7c55ce83f16d/volumes" Feb 20 12:20:05.513948 master-0 kubenswrapper[31420]: I0220 12:20:05.513933 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dba19cd-d9f4-419e-acd9-5a240202d577" path="/var/lib/kubelet/pods/4dba19cd-d9f4-419e-acd9-5a240202d577/volumes" Feb 20 12:20:05.546276 master-0 kubenswrapper[31420]: I0220 12:20:05.546210 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9drk4\" (UniqueName: \"kubernetes.io/projected/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-kube-api-access-9drk4\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.546522 master-0 kubenswrapper[31420]: I0220 12:20:05.546304 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-public-tls-certs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.546522 master-0 kubenswrapper[31420]: I0220 12:20:05.546455 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-logs\") pod \"glance-e60fa-default-external-api-0\" (UID: 
\"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.546522 master-0 kubenswrapper[31420]: I0220 12:20:05.546483 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-scripts\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.546693 master-0 kubenswrapper[31420]: I0220 12:20:05.546557 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.546693 master-0 kubenswrapper[31420]: I0220 12:20:05.546678 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-config-data\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.546855 master-0 kubenswrapper[31420]: I0220 12:20:05.546727 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-combined-ca-bundle\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.546855 master-0 kubenswrapper[31420]: I0220 12:20:05.546753 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-httpd-run\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.547315 master-0 kubenswrapper[31420]: I0220 12:20:05.547286 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-httpd-run\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.550050 master-0 kubenswrapper[31420]: I0220 12:20:05.549996 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-logs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:05.550769 master-0 kubenswrapper[31420]: I0220 12:20:05.550339 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 12:20:05.550769 master-0 kubenswrapper[31420]: I0220 12:20:05.550385 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/368f719461f0c265cdc1ceb7166e6bad74c18134a381ef3f2ecc6c3c88bbea1f/globalmount\"" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:05.552695 master-0 kubenswrapper[31420]: I0220 12:20:05.552662 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-combined-ca-bundle\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:05.553376 master-0 kubenswrapper[31420]: I0220 12:20:05.553345 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-scripts\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:05.555153 master-0 kubenswrapper[31420]: I0220 12:20:05.555098 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-public-tls-certs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:05.558079 master-0 kubenswrapper[31420]: I0220 12:20:05.558013 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-config-data\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:05.633910 master-0 kubenswrapper[31420]: I0220 12:20:05.633766 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9drk4\" (UniqueName: \"kubernetes.io/projected/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-kube-api-access-9drk4\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:05.806733 master-0 kubenswrapper[31420]: I0220 12:20:05.806657 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e60fa-default-internal-api-0"]
Feb 20 12:20:05.812211 master-0 kubenswrapper[31420]: I0220 12:20:05.812163 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.831572 master-0 kubenswrapper[31420]: I0220 12:20:05.818740 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e60fa-default-internal-config-data"
Feb 20 12:20:05.831572 master-0 kubenswrapper[31420]: I0220 12:20:05.821795 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 20 12:20:05.844954 master-0 kubenswrapper[31420]: I0220 12:20:05.844892 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e60fa-default-internal-api-0"]
Feb 20 12:20:05.853748 master-0 kubenswrapper[31420]: I0220 12:20:05.853639 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-httpd-run\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.853947 master-0 kubenswrapper[31420]: I0220 12:20:05.853814 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.853947 master-0 kubenswrapper[31420]: I0220 12:20:05.853849 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-scripts\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.853947 master-0 kubenswrapper[31420]: I0220 12:20:05.853922 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-logs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.854131 master-0 kubenswrapper[31420]: I0220 12:20:05.853953 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-internal-tls-certs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.854131 master-0 kubenswrapper[31420]: I0220 12:20:05.854026 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-combined-ca-bundle\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.854131 master-0 kubenswrapper[31420]: I0220 12:20:05.854084 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpjf\" (UniqueName: \"kubernetes.io/projected/2092a975-d3a8-4034-9301-d99c84087164-kube-api-access-pxpjf\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.854297 master-0 kubenswrapper[31420]: I0220 12:20:05.854146 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-config-data\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.956244 master-0 kubenswrapper[31420]: I0220 12:20:05.956099 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.956244 master-0 kubenswrapper[31420]: I0220 12:20:05.956189 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-scripts\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.956481 master-0 kubenswrapper[31420]: I0220 12:20:05.956254 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-logs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.956481 master-0 kubenswrapper[31420]: I0220 12:20:05.956281 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-internal-tls-certs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.956481 master-0 kubenswrapper[31420]: I0220 12:20:05.956350 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-combined-ca-bundle\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.956481 master-0 kubenswrapper[31420]: I0220 12:20:05.956396 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpjf\" (UniqueName: \"kubernetes.io/projected/2092a975-d3a8-4034-9301-d99c84087164-kube-api-access-pxpjf\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.956481 master-0 kubenswrapper[31420]: I0220 12:20:05.956443 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-config-data\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.956659 master-0 kubenswrapper[31420]: I0220 12:20:05.956519 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-httpd-run\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.957129 master-0 kubenswrapper[31420]: I0220 12:20:05.957084 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-httpd-run\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.958909 master-0 kubenswrapper[31420]: I0220 12:20:05.958873 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-logs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.962617 master-0 kubenswrapper[31420]: I0220 12:20:05.962550 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 12:20:05.962617 master-0 kubenswrapper[31420]: I0220 12:20:05.962606 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7fdfa428e75fa078c9f92f14e5c5b13eb7c249d4961d8f3276e21bff102641ef/globalmount\"" pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.962848 master-0 kubenswrapper[31420]: I0220 12:20:05.962814 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-config-data\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.963237 master-0 kubenswrapper[31420]: I0220 12:20:05.963198 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-internal-tls-certs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.968636 master-0 kubenswrapper[31420]: I0220 12:20:05.967155 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-scripts\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.974461 master-0 kubenswrapper[31420]: I0220 12:20:05.974412 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-combined-ca-bundle\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:05.979127 master-0 kubenswrapper[31420]: I0220 12:20:05.979073 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpjf\" (UniqueName: \"kubernetes.io/projected/2092a975-d3a8-4034-9301-d99c84087164-kube-api-access-pxpjf\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:06.888647 master-0 kubenswrapper[31420]: I0220 12:20:06.888599 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:07.165021 master-0 kubenswrapper[31420]: I0220 12:20:07.164617 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:07.213243 master-0 kubenswrapper[31420]: I0220 12:20:07.213173 31420 generic.go:334] "Generic (PLEG): container finished" podID="86d64296-9441-4c59-ac5a-0a6d94cf9499" containerID="d4c4b10edf183fb34c17d57ce1327c4a724f81082f7e37ecdd3583f5864e45a3" exitCode=0
Feb 20 12:20:07.213243 master-0 kubenswrapper[31420]: I0220 12:20:07.213222 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7cvhp" event={"ID":"86d64296-9441-4c59-ac5a-0a6d94cf9499","Type":"ContainerDied","Data":"d4c4b10edf183fb34c17d57ce1327c4a724f81082f7e37ecdd3583f5864e45a3"}
Feb 20 12:20:07.758707 master-0 kubenswrapper[31420]: I0220 12:20:07.758661 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"]
Feb 20 12:20:08.232539 master-0 kubenswrapper[31420]: I0220 12:20:08.232418 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-external-api-0" event={"ID":"f95db2f7-4dcd-43a4-93fc-650f8cf79f68","Type":"ContainerStarted","Data":"2835cfd96341f9e53665b810b042d43598ec62257baa536e632637d0c70bf674"}
Feb 20 12:20:08.250445 master-0 kubenswrapper[31420]: I0220 12:20:08.250347 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g4frr" event={"ID":"d644bcbb-d205-4408-a0c7-7e3bbc55e180","Type":"ContainerStarted","Data":"89b844a662e66706dd4946c8d62966d5c7ea40f1737816ca2dca836c3b17c380"}
Feb 20 12:20:08.276706 master-0 kubenswrapper[31420]: I0220 12:20:08.276621 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-g4frr" podStartSLOduration=3.866707765 podStartE2EDuration="8.27659687s" podCreationTimestamp="2026-02-20 12:20:00 +0000 UTC" firstStartedPulling="2026-02-20 12:20:02.749933249 +0000 UTC m=+907.469171490" lastFinishedPulling="2026-02-20 12:20:07.159822354 +0000 UTC m=+911.879060595" observedRunningTime="2026-02-20 12:20:08.266295369 +0000 UTC m=+912.985533620" watchObservedRunningTime="2026-02-20 12:20:08.27659687 +0000 UTC m=+912.995835111"
Feb 20 12:20:08.680674 master-0 kubenswrapper[31420]: I0220 12:20:08.680604 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:08.828415 master-0 kubenswrapper[31420]: I0220 12:20:08.828363 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-fernet-keys\") pod \"86d64296-9441-4c59-ac5a-0a6d94cf9499\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") "
Feb 20 12:20:08.828725 master-0 kubenswrapper[31420]: I0220 12:20:08.828484 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-combined-ca-bundle\") pod \"86d64296-9441-4c59-ac5a-0a6d94cf9499\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") "
Feb 20 12:20:08.828725 master-0 kubenswrapper[31420]: I0220 12:20:08.828546 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-scripts\") pod \"86d64296-9441-4c59-ac5a-0a6d94cf9499\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") "
Feb 20 12:20:08.828725 master-0 kubenswrapper[31420]: I0220 12:20:08.828570 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-config-data\") pod \"86d64296-9441-4c59-ac5a-0a6d94cf9499\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") "
Feb 20 12:20:08.828881 master-0 kubenswrapper[31420]: I0220 12:20:08.828738 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-credential-keys\") pod \"86d64296-9441-4c59-ac5a-0a6d94cf9499\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") "
Feb 20 12:20:08.828881 master-0 kubenswrapper[31420]: I0220 12:20:08.828777 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6npg\" (UniqueName: \"kubernetes.io/projected/86d64296-9441-4c59-ac5a-0a6d94cf9499-kube-api-access-m6npg\") pod \"86d64296-9441-4c59-ac5a-0a6d94cf9499\" (UID: \"86d64296-9441-4c59-ac5a-0a6d94cf9499\") "
Feb 20 12:20:08.832516 master-0 kubenswrapper[31420]: I0220 12:20:08.832445 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-scripts" (OuterVolumeSpecName: "scripts") pod "86d64296-9441-4c59-ac5a-0a6d94cf9499" (UID: "86d64296-9441-4c59-ac5a-0a6d94cf9499"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:08.832695 master-0 kubenswrapper[31420]: I0220 12:20:08.832593 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "86d64296-9441-4c59-ac5a-0a6d94cf9499" (UID: "86d64296-9441-4c59-ac5a-0a6d94cf9499"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:08.835473 master-0 kubenswrapper[31420]: I0220 12:20:08.835386 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d64296-9441-4c59-ac5a-0a6d94cf9499-kube-api-access-m6npg" (OuterVolumeSpecName: "kube-api-access-m6npg") pod "86d64296-9441-4c59-ac5a-0a6d94cf9499" (UID: "86d64296-9441-4c59-ac5a-0a6d94cf9499"). InnerVolumeSpecName "kube-api-access-m6npg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:20:08.838598 master-0 kubenswrapper[31420]: I0220 12:20:08.838520 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "86d64296-9441-4c59-ac5a-0a6d94cf9499" (UID: "86d64296-9441-4c59-ac5a-0a6d94cf9499"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:08.862984 master-0 kubenswrapper[31420]: I0220 12:20:08.862914 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86d64296-9441-4c59-ac5a-0a6d94cf9499" (UID: "86d64296-9441-4c59-ac5a-0a6d94cf9499"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:08.864626 master-0 kubenswrapper[31420]: I0220 12:20:08.864575 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-config-data" (OuterVolumeSpecName: "config-data") pod "86d64296-9441-4c59-ac5a-0a6d94cf9499" (UID: "86d64296-9441-4c59-ac5a-0a6d94cf9499"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:08.903655 master-0 kubenswrapper[31420]: I0220 12:20:08.903602 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:08.931008 master-0 kubenswrapper[31420]: I0220 12:20:08.930919 31420 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-credential-keys\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:08.931008 master-0 kubenswrapper[31420]: I0220 12:20:08.930981 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6npg\" (UniqueName: \"kubernetes.io/projected/86d64296-9441-4c59-ac5a-0a6d94cf9499-kube-api-access-m6npg\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:08.931008 master-0 kubenswrapper[31420]: I0220 12:20:08.930999 31420 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-fernet-keys\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:08.931008 master-0 kubenswrapper[31420]: I0220 12:20:08.931011 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:08.931008 master-0 kubenswrapper[31420]: I0220 12:20:08.931023 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:08.931349 master-0 kubenswrapper[31420]: I0220 12:20:08.931038 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d64296-9441-4c59-ac5a-0a6d94cf9499-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:09.269638 master-0 kubenswrapper[31420]: I0220 12:20:09.266670 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-external-api-0" event={"ID":"f95db2f7-4dcd-43a4-93fc-650f8cf79f68","Type":"ContainerStarted","Data":"1724846740ee17916c57bb8036a6f78794e53d3205353094d9eaaa4dc6bce4ee"}
Feb 20 12:20:09.269638 master-0 kubenswrapper[31420]: I0220 12:20:09.266793 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-external-api-0" event={"ID":"f95db2f7-4dcd-43a4-93fc-650f8cf79f68","Type":"ContainerStarted","Data":"356157ba73415aa002f3e37c1ee802ff68584de48c801ce706ce3de5e01cc676"}
Feb 20 12:20:09.272063 master-0 kubenswrapper[31420]: I0220 12:20:09.272028 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7cvhp"
Feb 20 12:20:09.274896 master-0 kubenswrapper[31420]: I0220 12:20:09.274805 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7cvhp" event={"ID":"86d64296-9441-4c59-ac5a-0a6d94cf9499","Type":"ContainerDied","Data":"0a4f3e2836984859108a1be089d0cfe8870375998a32a49372589998bb041586"}
Feb 20 12:20:09.274896 master-0 kubenswrapper[31420]: I0220 12:20:09.274894 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a4f3e2836984859108a1be089d0cfe8870375998a32a49372589998bb041586"
Feb 20 12:20:09.387262 master-0 kubenswrapper[31420]: I0220 12:20:09.387160 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e60fa-default-external-api-0" podStartSLOduration=4.387138069 podStartE2EDuration="4.387138069s" podCreationTimestamp="2026-02-20 12:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:09.362711929 +0000 UTC m=+914.081950190" watchObservedRunningTime="2026-02-20 12:20:09.387138069 +0000 UTC m=+914.106376330"
Feb 20 12:20:09.445946 master-0 kubenswrapper[31420]: I0220 12:20:09.445773 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:09.457554 master-0 kubenswrapper[31420]: I0220 12:20:09.453704 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7cvhp"]
Feb 20 12:20:09.471456 master-0 kubenswrapper[31420]: I0220 12:20:09.471404 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7cvhp"]
Feb 20 12:20:09.541270 master-0 kubenswrapper[31420]: I0220 12:20:09.541165 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d64296-9441-4c59-ac5a-0a6d94cf9499" path="/var/lib/kubelet/pods/86d64296-9441-4c59-ac5a-0a6d94cf9499/volumes"
Feb 20 12:20:09.544094 master-0 kubenswrapper[31420]: I0220 12:20:09.544068 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6c2dd"]
Feb 20 12:20:09.544712 master-0 kubenswrapper[31420]: E0220 12:20:09.544672 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d64296-9441-4c59-ac5a-0a6d94cf9499" containerName="keystone-bootstrap"
Feb 20 12:20:09.544813 master-0 kubenswrapper[31420]: I0220 12:20:09.544801 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d64296-9441-4c59-ac5a-0a6d94cf9499" containerName="keystone-bootstrap"
Feb 20 12:20:09.545093 master-0 kubenswrapper[31420]: I0220 12:20:09.545079 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d64296-9441-4c59-ac5a-0a6d94cf9499" containerName="keystone-bootstrap"
Feb 20 12:20:09.546575 master-0 kubenswrapper[31420]: I0220 12:20:09.546512 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.550025 master-0 kubenswrapper[31420]: I0220 12:20:09.549997 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 20 12:20:09.550299 master-0 kubenswrapper[31420]: I0220 12:20:09.550285 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 20 12:20:09.550699 master-0 kubenswrapper[31420]: I0220 12:20:09.550681 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 20 12:20:09.552438 master-0 kubenswrapper[31420]: I0220 12:20:09.552418 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 20 12:20:09.557028 master-0 kubenswrapper[31420]: I0220 12:20:09.556888 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6c2dd"]
Feb 20 12:20:09.669672 master-0 kubenswrapper[31420]: I0220 12:20:09.661220 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-fernet-keys\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.669672 master-0 kubenswrapper[31420]: I0220 12:20:09.661344 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-credential-keys\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.669672 master-0 kubenswrapper[31420]: I0220 12:20:09.661439 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-scripts\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.669672 master-0 kubenswrapper[31420]: I0220 12:20:09.661574 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-config-data\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.669672 master-0 kubenswrapper[31420]: I0220 12:20:09.661606 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-combined-ca-bundle\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.669672 master-0 kubenswrapper[31420]: I0220 12:20:09.661642 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8656\" (UniqueName: \"kubernetes.io/projected/546ac957-a54d-45ab-aaf7-f0f22fbb5883-kube-api-access-s8656\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.763790 master-0 kubenswrapper[31420]: I0220 12:20:09.763221 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-config-data\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.763790 master-0 kubenswrapper[31420]: I0220 12:20:09.763277 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-combined-ca-bundle\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.763790 master-0 kubenswrapper[31420]: I0220 12:20:09.763327 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8656\" (UniqueName: \"kubernetes.io/projected/546ac957-a54d-45ab-aaf7-f0f22fbb5883-kube-api-access-s8656\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.763790 master-0 kubenswrapper[31420]: I0220 12:20:09.763403 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-fernet-keys\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.763790 master-0 kubenswrapper[31420]: I0220 12:20:09.763442 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-credential-keys\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.763790 master-0 kubenswrapper[31420]: I0220 12:20:09.763479 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-scripts\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.769934 master-0 kubenswrapper[31420]: I0220 12:20:09.768387 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-scripts\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.771361 master-0 kubenswrapper[31420]: I0220 12:20:09.771162 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-combined-ca-bundle\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.771491 master-0 kubenswrapper[31420]: I0220 12:20:09.771385 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-fernet-keys\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.780679 master-0 kubenswrapper[31420]: I0220 12:20:09.772768 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-config-data\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.783373 master-0 kubenswrapper[31420]: I0220 12:20:09.783318 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-credential-keys\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.786024 master-0 kubenswrapper[31420]: I0220 12:20:09.785973 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8656\" (UniqueName: \"kubernetes.io/projected/546ac957-a54d-45ab-aaf7-f0f22fbb5883-kube-api-access-s8656\") pod \"keystone-bootstrap-6c2dd\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") " pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:09.941225 master-0 kubenswrapper[31420]: I0220 12:20:09.941077 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:10.117714 master-0 kubenswrapper[31420]: I0220 12:20:10.117613 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e60fa-default-internal-api-0"]
Feb 20 12:20:11.052341 master-0 kubenswrapper[31420]: I0220 12:20:11.052281 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-9z9n4"]
Feb 20 12:20:11.054468 master-0 kubenswrapper[31420]: I0220 12:20:11.054420 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-9z9n4"
Feb 20 12:20:11.057769 master-0 kubenswrapper[31420]: I0220 12:20:11.057488 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Feb 20 12:20:11.057769 master-0 kubenswrapper[31420]: I0220 12:20:11.057755 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts"
Feb 20 12:20:11.081740 master-0 kubenswrapper[31420]: I0220 12:20:11.075412 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-9z9n4"]
Feb 20 12:20:11.140549 master-0 kubenswrapper[31420]: E0220 12:20:11.139729 31420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd644bcbb_d205_4408_a0c7_7e3bbc55e180.slice/crio-89b844a662e66706dd4946c8d62966d5c7ea40f1737816ca2dca836c3b17c380.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd644bcbb_d205_4408_a0c7_7e3bbc55e180.slice/crio-conmon-89b844a662e66706dd4946c8d62966d5c7ea40f1737816ca2dca836c3b17c380.scope\": RecentStats: unable to find data in memory cache]"
Feb 20 12:20:11.223548 master-0 kubenswrapper[31420]: I0220 12:20:11.209977 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1caf9802-b963-4368-ac29-e47812b48ad3-config-data-merged\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4"
Feb 20 12:20:11.223548 master-0 kubenswrapper[31420]: I0220 12:20:11.210103 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1caf9802-b963-4368-ac29-e47812b48ad3-etc-podinfo\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4"
Feb 20 12:20:11.223548 master-0 kubenswrapper[31420]: I0220 12:20:11.210182 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-combined-ca-bundle\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4"
Feb 20 12:20:11.223548 master-0 kubenswrapper[31420]: I0220 12:20:11.210213 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4jdp\" (UniqueName: \"kubernetes.io/projected/1caf9802-b963-4368-ac29-e47812b48ad3-kube-api-access-p4jdp\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4"
Feb 20 12:20:11.223548 master-0 kubenswrapper[31420]: I0220 12:20:11.210305 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-config-data\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4"
Feb 20 12:20:11.223548 master-0 kubenswrapper[31420]: I0220 12:20:11.210332 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-scripts\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4"
Feb 20 12:20:11.272552 master-0 kubenswrapper[31420]: I0220 12:20:11.262684 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c"
Feb 20 12:20:11.298347 master-0 kubenswrapper[31420]: I0220 12:20:11.298045 31420 generic.go:334] "Generic (PLEG): container finished" podID="d644bcbb-d205-4408-a0c7-7e3bbc55e180" containerID="89b844a662e66706dd4946c8d62966d5c7ea40f1737816ca2dca836c3b17c380" exitCode=0
Feb 20 12:20:11.298347 master-0 kubenswrapper[31420]: I0220 12:20:11.298109 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g4frr" event={"ID":"d644bcbb-d205-4408-a0c7-7e3bbc55e180","Type":"ContainerDied","Data":"89b844a662e66706dd4946c8d62966d5c7ea40f1737816ca2dca836c3b17c380"}
Feb 20 12:20:11.313240 master-0 kubenswrapper[31420]: I0220 12:20:11.313185 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1caf9802-b963-4368-ac29-e47812b48ad3-config-data-merged\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4"
Feb 20 12:20:11.313429 master-0 kubenswrapper[31420]: I0220 12:20:11.313275 31420 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1caf9802-b963-4368-ac29-e47812b48ad3-etc-podinfo\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.313472 master-0 kubenswrapper[31420]: I0220 12:20:11.313442 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-combined-ca-bundle\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.313508 master-0 kubenswrapper[31420]: I0220 12:20:11.313472 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4jdp\" (UniqueName: \"kubernetes.io/projected/1caf9802-b963-4368-ac29-e47812b48ad3-kube-api-access-p4jdp\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.313715 master-0 kubenswrapper[31420]: I0220 12:20:11.313668 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1caf9802-b963-4368-ac29-e47812b48ad3-config-data-merged\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.313779 master-0 kubenswrapper[31420]: I0220 12:20:11.313732 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-config-data\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.313842 master-0 kubenswrapper[31420]: I0220 12:20:11.313826 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-scripts\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.320044 master-0 kubenswrapper[31420]: I0220 12:20:11.318884 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1caf9802-b963-4368-ac29-e47812b48ad3-etc-podinfo\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.320284 master-0 kubenswrapper[31420]: I0220 12:20:11.320238 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-combined-ca-bundle\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.320385 master-0 kubenswrapper[31420]: I0220 12:20:11.320292 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-scripts\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.322239 master-0 kubenswrapper[31420]: I0220 12:20:11.321742 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-config-data\") pod \"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.366225 master-0 kubenswrapper[31420]: I0220 12:20:11.366163 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4jdp\" (UniqueName: \"kubernetes.io/projected/1caf9802-b963-4368-ac29-e47812b48ad3-kube-api-access-p4jdp\") pod 
\"ironic-db-sync-9z9n4\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.399598 master-0 kubenswrapper[31420]: I0220 12:20:11.399308 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:11.417580 master-0 kubenswrapper[31420]: I0220 12:20:11.416885 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr"] Feb 20 12:20:11.417580 master-0 kubenswrapper[31420]: I0220 12:20:11.417256 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" podUID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerName="dnsmasq-dns" containerID="cri-o://da731389c95edc5cb32c72f71c53568fec6984d8889fcaf7c10e28098e1b42c5" gracePeriod=10 Feb 20 12:20:11.561457 master-0 kubenswrapper[31420]: I0220 12:20:11.561030 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" podUID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.199:5353: connect: connection refused" Feb 20 12:20:12.317841 master-0 kubenswrapper[31420]: I0220 12:20:12.317734 31420 generic.go:334] "Generic (PLEG): container finished" podID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerID="da731389c95edc5cb32c72f71c53568fec6984d8889fcaf7c10e28098e1b42c5" exitCode=0 Feb 20 12:20:12.318840 master-0 kubenswrapper[31420]: I0220 12:20:12.317833 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" event={"ID":"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4","Type":"ContainerDied","Data":"da731389c95edc5cb32c72f71c53568fec6984d8889fcaf7c10e28098e1b42c5"} Feb 20 12:20:16.562335 master-0 kubenswrapper[31420]: I0220 12:20:16.562262 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" 
podUID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.199:5353: connect: connection refused" Feb 20 12:20:17.165261 master-0 kubenswrapper[31420]: I0220 12:20:17.165171 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:17.165810 master-0 kubenswrapper[31420]: I0220 12:20:17.165751 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:17.211483 master-0 kubenswrapper[31420]: I0220 12:20:17.211415 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:17.219340 master-0 kubenswrapper[31420]: I0220 12:20:17.219277 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:17.391863 master-0 kubenswrapper[31420]: I0220 12:20:17.391821 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:17.392060 master-0 kubenswrapper[31420]: I0220 12:20:17.391887 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:20:17.524796 master-0 kubenswrapper[31420]: W0220 12:20:17.524439 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2092a975_d3a8_4034_9301_d99c84087164.slice/crio-ffc21f80eb6e08b57d951e581b7f08fb1ca94c23ef7c689bdb9c9b9fb4448a96 WatchSource:0}: Error finding container ffc21f80eb6e08b57d951e581b7f08fb1ca94c23ef7c689bdb9c9b9fb4448a96: Status 404 returned error can't find the container with id ffc21f80eb6e08b57d951e581b7f08fb1ca94c23ef7c689bdb9c9b9fb4448a96 Feb 20 12:20:17.624891 master-0 kubenswrapper[31420]: I0220 
12:20:17.624500 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:17.755220 master-0 kubenswrapper[31420]: I0220 12:20:17.755162 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-scripts\") pod \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " Feb 20 12:20:17.755464 master-0 kubenswrapper[31420]: I0220 12:20:17.755413 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-config-data\") pod \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " Feb 20 12:20:17.755522 master-0 kubenswrapper[31420]: I0220 12:20:17.755468 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-combined-ca-bundle\") pod \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " Feb 20 12:20:17.755522 master-0 kubenswrapper[31420]: I0220 12:20:17.755504 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sflcg\" (UniqueName: \"kubernetes.io/projected/d644bcbb-d205-4408-a0c7-7e3bbc55e180-kube-api-access-sflcg\") pod \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " Feb 20 12:20:17.755636 master-0 kubenswrapper[31420]: I0220 12:20:17.755582 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d644bcbb-d205-4408-a0c7-7e3bbc55e180-logs\") pod \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\" (UID: \"d644bcbb-d205-4408-a0c7-7e3bbc55e180\") " Feb 20 12:20:17.756709 master-0 
kubenswrapper[31420]: I0220 12:20:17.756656 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d644bcbb-d205-4408-a0c7-7e3bbc55e180-logs" (OuterVolumeSpecName: "logs") pod "d644bcbb-d205-4408-a0c7-7e3bbc55e180" (UID: "d644bcbb-d205-4408-a0c7-7e3bbc55e180"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:20:17.758352 master-0 kubenswrapper[31420]: I0220 12:20:17.758291 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-scripts" (OuterVolumeSpecName: "scripts") pod "d644bcbb-d205-4408-a0c7-7e3bbc55e180" (UID: "d644bcbb-d205-4408-a0c7-7e3bbc55e180"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:17.760682 master-0 kubenswrapper[31420]: I0220 12:20:17.760633 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d644bcbb-d205-4408-a0c7-7e3bbc55e180-kube-api-access-sflcg" (OuterVolumeSpecName: "kube-api-access-sflcg") pod "d644bcbb-d205-4408-a0c7-7e3bbc55e180" (UID: "d644bcbb-d205-4408-a0c7-7e3bbc55e180"). InnerVolumeSpecName "kube-api-access-sflcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:17.800273 master-0 kubenswrapper[31420]: I0220 12:20:17.800169 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d644bcbb-d205-4408-a0c7-7e3bbc55e180" (UID: "d644bcbb-d205-4408-a0c7-7e3bbc55e180"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:17.809331 master-0 kubenswrapper[31420]: I0220 12:20:17.808710 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-config-data" (OuterVolumeSpecName: "config-data") pod "d644bcbb-d205-4408-a0c7-7e3bbc55e180" (UID: "d644bcbb-d205-4408-a0c7-7e3bbc55e180"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:17.859705 master-0 kubenswrapper[31420]: I0220 12:20:17.859654 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:17.859931 master-0 kubenswrapper[31420]: I0220 12:20:17.859916 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:17.860003 master-0 kubenswrapper[31420]: I0220 12:20:17.859991 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d644bcbb-d205-4408-a0c7-7e3bbc55e180-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:17.860082 master-0 kubenswrapper[31420]: I0220 12:20:17.860071 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sflcg\" (UniqueName: \"kubernetes.io/projected/d644bcbb-d205-4408-a0c7-7e3bbc55e180-kube-api-access-sflcg\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:17.860150 master-0 kubenswrapper[31420]: I0220 12:20:17.860140 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d644bcbb-d205-4408-a0c7-7e3bbc55e180-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:18.404286 master-0 kubenswrapper[31420]: I0220 12:20:18.404206 31420 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g4frr" event={"ID":"d644bcbb-d205-4408-a0c7-7e3bbc55e180","Type":"ContainerDied","Data":"1e6ecb6b67541a60da48b5db4cc36166da3aa144b93396cc10b713dc030c0211"} Feb 20 12:20:18.404286 master-0 kubenswrapper[31420]: I0220 12:20:18.404269 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e6ecb6b67541a60da48b5db4cc36166da3aa144b93396cc10b713dc030c0211" Feb 20 12:20:18.404516 master-0 kubenswrapper[31420]: I0220 12:20:18.404279 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-g4frr" Feb 20 12:20:18.407897 master-0 kubenswrapper[31420]: I0220 12:20:18.407785 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-internal-api-0" event={"ID":"2092a975-d3a8-4034-9301-d99c84087164","Type":"ContainerStarted","Data":"ffc21f80eb6e08b57d951e581b7f08fb1ca94c23ef7c689bdb9c9b9fb4448a96"} Feb 20 12:20:18.830110 master-0 kubenswrapper[31420]: I0220 12:20:18.828626 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7556d48dcd-b785x"] Feb 20 12:20:18.830110 master-0 kubenswrapper[31420]: E0220 12:20:18.829290 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d644bcbb-d205-4408-a0c7-7e3bbc55e180" containerName="placement-db-sync" Feb 20 12:20:18.830110 master-0 kubenswrapper[31420]: I0220 12:20:18.829309 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="d644bcbb-d205-4408-a0c7-7e3bbc55e180" containerName="placement-db-sync" Feb 20 12:20:18.830110 master-0 kubenswrapper[31420]: I0220 12:20:18.829630 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="d644bcbb-d205-4408-a0c7-7e3bbc55e180" containerName="placement-db-sync" Feb 20 12:20:18.830851 master-0 kubenswrapper[31420]: I0220 12:20:18.830831 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.832803 master-0 kubenswrapper[31420]: I0220 12:20:18.832763 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 20 12:20:18.832884 master-0 kubenswrapper[31420]: I0220 12:20:18.832850 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 20 12:20:18.832928 master-0 kubenswrapper[31420]: I0220 12:20:18.832763 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 20 12:20:18.834284 master-0 kubenswrapper[31420]: I0220 12:20:18.834251 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 20 12:20:18.854627 master-0 kubenswrapper[31420]: I0220 12:20:18.852982 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7556d48dcd-b785x"] Feb 20 12:20:18.886851 master-0 kubenswrapper[31420]: I0220 12:20:18.886789 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-combined-ca-bundle\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.887136 master-0 kubenswrapper[31420]: I0220 12:20:18.887080 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-internal-tls-certs\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.887352 master-0 kubenswrapper[31420]: I0220 12:20:18.887326 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-scripts\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.887409 master-0 kubenswrapper[31420]: I0220 12:20:18.887375 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-public-tls-certs\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.887445 master-0 kubenswrapper[31420]: I0220 12:20:18.887407 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljq9\" (UniqueName: \"kubernetes.io/projected/3b5f8852-b52b-402c-bc6b-f21c99991432-kube-api-access-cljq9\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.887812 master-0 kubenswrapper[31420]: I0220 12:20:18.887729 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-config-data\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.887946 master-0 kubenswrapper[31420]: I0220 12:20:18.887925 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b5f8852-b52b-402c-bc6b-f21c99991432-logs\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.991242 master-0 kubenswrapper[31420]: I0220 12:20:18.991178 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b5f8852-b52b-402c-bc6b-f21c99991432-logs\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.991453 master-0 kubenswrapper[31420]: I0220 12:20:18.991296 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-combined-ca-bundle\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.991610 master-0 kubenswrapper[31420]: I0220 12:20:18.991518 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-internal-tls-certs\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.991776 master-0 kubenswrapper[31420]: I0220 12:20:18.991735 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b5f8852-b52b-402c-bc6b-f21c99991432-logs\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.991818 master-0 kubenswrapper[31420]: I0220 12:20:18.991741 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-scripts\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.991852 master-0 kubenswrapper[31420]: I0220 12:20:18.991841 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-public-tls-certs\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.991885 master-0 kubenswrapper[31420]: I0220 12:20:18.991873 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cljq9\" (UniqueName: \"kubernetes.io/projected/3b5f8852-b52b-402c-bc6b-f21c99991432-kube-api-access-cljq9\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.992052 master-0 kubenswrapper[31420]: I0220 12:20:18.992025 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-config-data\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.996147 master-0 kubenswrapper[31420]: I0220 12:20:18.996108 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-public-tls-certs\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.996248 master-0 kubenswrapper[31420]: I0220 12:20:18.996213 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-combined-ca-bundle\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:18.998227 master-0 kubenswrapper[31420]: I0220 12:20:18.997702 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-config-data\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:19.001611 master-0 kubenswrapper[31420]: I0220 12:20:19.000167 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-scripts\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:19.005065 master-0 kubenswrapper[31420]: I0220 12:20:19.005014 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-internal-tls-certs\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:19.023304 master-0 kubenswrapper[31420]: I0220 12:20:19.021821 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cljq9\" (UniqueName: \"kubernetes.io/projected/3b5f8852-b52b-402c-bc6b-f21c99991432-kube-api-access-cljq9\") pod \"placement-7556d48dcd-b785x\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:19.025418 master-0 kubenswrapper[31420]: I0220 12:20:19.024942 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" Feb 20 12:20:19.063454 master-0 kubenswrapper[31420]: I0220 12:20:19.063405 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7556d48dcd-b785x"
Feb 20 12:20:19.093363 master-0 kubenswrapper[31420]: I0220 12:20:19.093303 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-sb\") pod \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") "
Feb 20 12:20:19.093672 master-0 kubenswrapper[31420]: I0220 12:20:19.093553 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-config\") pod \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") "
Feb 20 12:20:19.093672 master-0 kubenswrapper[31420]: I0220 12:20:19.093620 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-nb\") pod \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") "
Feb 20 12:20:19.093672 master-0 kubenswrapper[31420]: I0220 12:20:19.093664 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-svc\") pod \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") "
Feb 20 12:20:19.093832 master-0 kubenswrapper[31420]: I0220 12:20:19.093737 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-swift-storage-0\") pod \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") "
Feb 20 12:20:19.093832 master-0 kubenswrapper[31420]: I0220 12:20:19.093775 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hpgp\" (UniqueName: \"kubernetes.io/projected/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-kube-api-access-2hpgp\") pod \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\" (UID: \"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4\") "
Feb 20 12:20:19.098953 master-0 kubenswrapper[31420]: I0220 12:20:19.098886 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-kube-api-access-2hpgp" (OuterVolumeSpecName: "kube-api-access-2hpgp") pod "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" (UID: "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4"). InnerVolumeSpecName "kube-api-access-2hpgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:20:19.152387 master-0 kubenswrapper[31420]: I0220 12:20:19.151226 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" (UID: "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:19.163563 master-0 kubenswrapper[31420]: I0220 12:20:19.163485 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" (UID: "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:19.171073 master-0 kubenswrapper[31420]: I0220 12:20:19.171007 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" (UID: "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:19.177109 master-0 kubenswrapper[31420]: I0220 12:20:19.176707 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" (UID: "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:19.196512 master-0 kubenswrapper[31420]: I0220 12:20:19.196414 31420 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:19.196512 master-0 kubenswrapper[31420]: I0220 12:20:19.196482 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hpgp\" (UniqueName: \"kubernetes.io/projected/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-kube-api-access-2hpgp\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:19.196512 master-0 kubenswrapper[31420]: I0220 12:20:19.196497 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:19.196512 master-0 kubenswrapper[31420]: I0220 12:20:19.196509 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:19.197216 master-0 kubenswrapper[31420]: I0220 12:20:19.196539 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:19.205718 master-0 kubenswrapper[31420]: I0220 12:20:19.205651 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-config" (OuterVolumeSpecName: "config") pod "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" (UID: "359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:20:19.298259 master-0 kubenswrapper[31420]: I0220 12:20:19.298133 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:19.357562 master-0 kubenswrapper[31420]: I0220 12:20:19.354737 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6c2dd"]
Feb 20 12:20:19.381849 master-0 kubenswrapper[31420]: W0220 12:20:19.376688 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod546ac957_a54d_45ab_aaf7_f0f22fbb5883.slice/crio-77ad47d667ed14bb327abd5af18757734db08d405c4f07bfdf01c92d2fb0165a WatchSource:0}: Error finding container 77ad47d667ed14bb327abd5af18757734db08d405c4f07bfdf01c92d2fb0165a: Status 404 returned error can't find the container with id 77ad47d667ed14bb327abd5af18757734db08d405c4f07bfdf01c92d2fb0165a
Feb 20 12:20:19.389603 master-0 kubenswrapper[31420]: I0220 12:20:19.389165 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:19.392769 master-0 kubenswrapper[31420]: I0220 12:20:19.391061 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:20:19.453001 master-0 kubenswrapper[31420]: I0220 12:20:19.452945 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6c2dd" event={"ID":"546ac957-a54d-45ab-aaf7-f0f22fbb5883","Type":"ContainerStarted","Data":"77ad47d667ed14bb327abd5af18757734db08d405c4f07bfdf01c92d2fb0165a"}
Feb 20 12:20:19.464968 master-0 kubenswrapper[31420]: I0220 12:20:19.464923 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr"
Feb 20 12:20:19.469660 master-0 kubenswrapper[31420]: I0220 12:20:19.469621 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr" event={"ID":"359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4","Type":"ContainerDied","Data":"d28884cfcfb83f057be8f956d79455ae8b698fc3d5dd33f946f75b9ec1db23ab"}
Feb 20 12:20:19.469846 master-0 kubenswrapper[31420]: I0220 12:20:19.469829 31420 scope.go:117] "RemoveContainer" containerID="da731389c95edc5cb32c72f71c53568fec6984d8889fcaf7c10e28098e1b42c5"
Feb 20 12:20:19.510839 master-0 kubenswrapper[31420]: W0220 12:20:19.510784 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1caf9802_b963_4368_ac29_e47812b48ad3.slice/crio-704b949559673cc39aea04ffc269074517b7e2bfc091883085abd01edd13ed97 WatchSource:0}: Error finding container 704b949559673cc39aea04ffc269074517b7e2bfc091883085abd01edd13ed97: Status 404 returned error can't find the container with id 704b949559673cc39aea04ffc269074517b7e2bfc091883085abd01edd13ed97
Feb 20 12:20:19.555403 master-0 kubenswrapper[31420]: I0220 12:20:19.555355 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-9z9n4"]
Feb 20 12:20:19.566052 master-0 kubenswrapper[31420]: I0220 12:20:19.565984 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr"]
Feb 20 12:20:19.566892 master-0 kubenswrapper[31420]: I0220 12:20:19.566853 31420 scope.go:117] "RemoveContainer" containerID="d8ac5b816f52692216025c704a036f55a8b6853585d811910bfc71679665dec5"
Feb 20 12:20:19.576415 master-0 kubenswrapper[31420]: I0220 12:20:19.574398 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9bfb6bf7-8rdzr"]
Feb 20 12:20:19.701011 master-0 kubenswrapper[31420]: W0220 12:20:19.700863 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b5f8852_b52b_402c_bc6b_f21c99991432.slice/crio-89bd61052748ca0169c372476387958e2c9e4dbbf7e4b9932eb1e732cd333afe WatchSource:0}: Error finding container 89bd61052748ca0169c372476387958e2c9e4dbbf7e4b9932eb1e732cd333afe: Status 404 returned error can't find the container with id 89bd61052748ca0169c372476387958e2c9e4dbbf7e4b9932eb1e732cd333afe
Feb 20 12:20:19.707170 master-0 kubenswrapper[31420]: I0220 12:20:19.707106 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7556d48dcd-b785x"]
Feb 20 12:20:20.524575 master-0 kubenswrapper[31420]: I0220 12:20:20.524487 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-db-sync-wkljp" event={"ID":"f85180db-2d00-4ec9-b408-813c4db2d86b","Type":"ContainerStarted","Data":"fc1b6c7301a8aa1f6470e8ecb18f25402b2f6fd16e029212b08909682ebf8d84"}
Feb 20 12:20:20.542554 master-0 kubenswrapper[31420]: I0220 12:20:20.542307 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6c2dd" event={"ID":"546ac957-a54d-45ab-aaf7-f0f22fbb5883","Type":"ContainerStarted","Data":"a837f6722373de985a0022d381f5e0bd07e868b8b2bee4a3be2b2f377ec6c72d"}
Feb 20 12:20:20.566994 master-0 kubenswrapper[31420]: I0220 12:20:20.566479 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d44a4-db-sync-wkljp" podStartSLOduration=3.331258489 podStartE2EDuration="20.566460412s" podCreationTimestamp="2026-02-20 12:20:00 +0000 UTC" firstStartedPulling="2026-02-20 12:20:01.676434106 +0000 UTC m=+906.395672347" lastFinishedPulling="2026-02-20 12:20:18.911636029 +0000 UTC m=+923.630874270" observedRunningTime="2026-02-20 12:20:20.550902842 +0000 UTC m=+925.270141113" watchObservedRunningTime="2026-02-20 12:20:20.566460412 +0000 UTC m=+925.285698663"
Feb 20 12:20:20.575241 master-0 kubenswrapper[31420]: I0220 12:20:20.573903 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7556d48dcd-b785x" event={"ID":"3b5f8852-b52b-402c-bc6b-f21c99991432","Type":"ContainerStarted","Data":"7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252"}
Feb 20 12:20:20.575241 master-0 kubenswrapper[31420]: I0220 12:20:20.573975 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7556d48dcd-b785x" event={"ID":"3b5f8852-b52b-402c-bc6b-f21c99991432","Type":"ContainerStarted","Data":"b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4"}
Feb 20 12:20:20.575241 master-0 kubenswrapper[31420]: I0220 12:20:20.573992 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7556d48dcd-b785x" event={"ID":"3b5f8852-b52b-402c-bc6b-f21c99991432","Type":"ContainerStarted","Data":"89bd61052748ca0169c372476387958e2c9e4dbbf7e4b9932eb1e732cd333afe"}
Feb 20 12:20:20.575241 master-0 kubenswrapper[31420]: I0220 12:20:20.574874 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7556d48dcd-b785x"
Feb 20 12:20:20.575241 master-0 kubenswrapper[31420]: I0220 12:20:20.574934 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7556d48dcd-b785x"
Feb 20 12:20:20.579541 master-0 kubenswrapper[31420]: I0220 12:20:20.578796 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-internal-api-0" event={"ID":"2092a975-d3a8-4034-9301-d99c84087164","Type":"ContainerStarted","Data":"a7973c0efdfc99ea933c9dbc97a7ec14c2ad36e9892e4b5a8fae43fb661baad8"}
Feb 20 12:20:20.579541 master-0 kubenswrapper[31420]: I0220 12:20:20.578839 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-internal-api-0" event={"ID":"2092a975-d3a8-4034-9301-d99c84087164","Type":"ContainerStarted","Data":"101e552f3ad99c0eddb9a1eb81c89162720cb2fc3e3410a3f72dba0dfa7ff9b7"}
Feb 20 12:20:20.581577 master-0 kubenswrapper[31420]: I0220 12:20:20.580801 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-9z9n4" event={"ID":"1caf9802-b963-4368-ac29-e47812b48ad3","Type":"ContainerStarted","Data":"704b949559673cc39aea04ffc269074517b7e2bfc091883085abd01edd13ed97"}
Feb 20 12:20:20.593287 master-0 kubenswrapper[31420]: I0220 12:20:20.591891 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6c2dd" podStartSLOduration=11.59187253 podStartE2EDuration="11.59187253s" podCreationTimestamp="2026-02-20 12:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:20.573243284 +0000 UTC m=+925.292481535" watchObservedRunningTime="2026-02-20 12:20:20.59187253 +0000 UTC m=+925.311110761"
Feb 20 12:20:20.607939 master-0 kubenswrapper[31420]: I0220 12:20:20.607372 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7556d48dcd-b785x" podStartSLOduration=2.607338378 podStartE2EDuration="2.607338378s" podCreationTimestamp="2026-02-20 12:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:20.602087859 +0000 UTC m=+925.321326120" watchObservedRunningTime="2026-02-20 12:20:20.607338378 +0000 UTC m=+925.326576619"
Feb 20 12:20:20.636173 master-0 kubenswrapper[31420]: I0220 12:20:20.635478 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e60fa-default-internal-api-0" podStartSLOduration=17.635458812 podStartE2EDuration="17.635458812s" podCreationTimestamp="2026-02-20 12:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:20.6205123 +0000 UTC m=+925.339750551" watchObservedRunningTime="2026-02-20 12:20:20.635458812 +0000 UTC m=+925.354697053"
Feb 20 12:20:21.516121 master-0 kubenswrapper[31420]: I0220 12:20:21.516050 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" path="/var/lib/kubelet/pods/359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4/volumes"
Feb 20 12:20:23.622773 master-0 kubenswrapper[31420]: I0220 12:20:23.622666 31420 generic.go:334] "Generic (PLEG): container finished" podID="546ac957-a54d-45ab-aaf7-f0f22fbb5883" containerID="a837f6722373de985a0022d381f5e0bd07e868b8b2bee4a3be2b2f377ec6c72d" exitCode=0
Feb 20 12:20:23.622773 master-0 kubenswrapper[31420]: I0220 12:20:23.622750 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6c2dd" event={"ID":"546ac957-a54d-45ab-aaf7-f0f22fbb5883","Type":"ContainerDied","Data":"a837f6722373de985a0022d381f5e0bd07e868b8b2bee4a3be2b2f377ec6c72d"}
Feb 20 12:20:27.318609 master-0 kubenswrapper[31420]: I0220 12:20:27.318555 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:27.420936 master-0 kubenswrapper[31420]: I0220 12:20:27.420692 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-scripts\") pod \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") "
Feb 20 12:20:27.421452 master-0 kubenswrapper[31420]: I0220 12:20:27.421423 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-combined-ca-bundle\") pod \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") "
Feb 20 12:20:27.421700 master-0 kubenswrapper[31420]: I0220 12:20:27.421680 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-credential-keys\") pod \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") "
Feb 20 12:20:27.421774 master-0 kubenswrapper[31420]: I0220 12:20:27.421735 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-fernet-keys\") pod \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") "
Feb 20 12:20:27.421810 master-0 kubenswrapper[31420]: I0220 12:20:27.421795 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8656\" (UniqueName: \"kubernetes.io/projected/546ac957-a54d-45ab-aaf7-f0f22fbb5883-kube-api-access-s8656\") pod \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") "
Feb 20 12:20:27.421847 master-0 kubenswrapper[31420]: I0220 12:20:27.421817 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-config-data\") pod \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\" (UID: \"546ac957-a54d-45ab-aaf7-f0f22fbb5883\") "
Feb 20 12:20:27.425972 master-0 kubenswrapper[31420]: I0220 12:20:27.425867 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "546ac957-a54d-45ab-aaf7-f0f22fbb5883" (UID: "546ac957-a54d-45ab-aaf7-f0f22fbb5883"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:27.425972 master-0 kubenswrapper[31420]: I0220 12:20:27.425918 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "546ac957-a54d-45ab-aaf7-f0f22fbb5883" (UID: "546ac957-a54d-45ab-aaf7-f0f22fbb5883"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:27.426108 master-0 kubenswrapper[31420]: I0220 12:20:27.425962 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-scripts" (OuterVolumeSpecName: "scripts") pod "546ac957-a54d-45ab-aaf7-f0f22fbb5883" (UID: "546ac957-a54d-45ab-aaf7-f0f22fbb5883"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:27.439217 master-0 kubenswrapper[31420]: I0220 12:20:27.439159 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/546ac957-a54d-45ab-aaf7-f0f22fbb5883-kube-api-access-s8656" (OuterVolumeSpecName: "kube-api-access-s8656") pod "546ac957-a54d-45ab-aaf7-f0f22fbb5883" (UID: "546ac957-a54d-45ab-aaf7-f0f22fbb5883"). InnerVolumeSpecName "kube-api-access-s8656". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:20:27.458534 master-0 kubenswrapper[31420]: I0220 12:20:27.458479 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-config-data" (OuterVolumeSpecName: "config-data") pod "546ac957-a54d-45ab-aaf7-f0f22fbb5883" (UID: "546ac957-a54d-45ab-aaf7-f0f22fbb5883"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:27.484514 master-0 kubenswrapper[31420]: I0220 12:20:27.484463 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "546ac957-a54d-45ab-aaf7-f0f22fbb5883" (UID: "546ac957-a54d-45ab-aaf7-f0f22fbb5883"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:27.525097 master-0 kubenswrapper[31420]: I0220 12:20:27.525043 31420 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-credential-keys\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:27.525097 master-0 kubenswrapper[31420]: I0220 12:20:27.525085 31420 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-fernet-keys\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:27.525097 master-0 kubenswrapper[31420]: I0220 12:20:27.525097 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8656\" (UniqueName: \"kubernetes.io/projected/546ac957-a54d-45ab-aaf7-f0f22fbb5883-kube-api-access-s8656\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:27.525234 master-0 kubenswrapper[31420]: I0220 12:20:27.525108 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:27.525234 master-0 kubenswrapper[31420]: I0220 12:20:27.525118 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:27.525234 master-0 kubenswrapper[31420]: I0220 12:20:27.525126 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ac957-a54d-45ab-aaf7-f0f22fbb5883-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:27.683105 master-0 kubenswrapper[31420]: I0220 12:20:27.683029 31420 generic.go:334] "Generic (PLEG): container finished" podID="1caf9802-b963-4368-ac29-e47812b48ad3" containerID="6519461b2e173561b6ef562740aab62732cd443f0355bca2694104d6d4bc42f7" exitCode=0
Feb 20 12:20:27.683393 master-0 kubenswrapper[31420]: I0220 12:20:27.683173 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-9z9n4" event={"ID":"1caf9802-b963-4368-ac29-e47812b48ad3","Type":"ContainerDied","Data":"6519461b2e173561b6ef562740aab62732cd443f0355bca2694104d6d4bc42f7"}
Feb 20 12:20:27.688764 master-0 kubenswrapper[31420]: I0220 12:20:27.688483 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6c2dd" event={"ID":"546ac957-a54d-45ab-aaf7-f0f22fbb5883","Type":"ContainerDied","Data":"77ad47d667ed14bb327abd5af18757734db08d405c4f07bfdf01c92d2fb0165a"}
Feb 20 12:20:27.688764 master-0 kubenswrapper[31420]: I0220 12:20:27.688578 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77ad47d667ed14bb327abd5af18757734db08d405c4f07bfdf01c92d2fb0165a"
Feb 20 12:20:27.688764 master-0 kubenswrapper[31420]: I0220 12:20:27.688610 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6c2dd"
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: I0220 12:20:28.477937 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5947585c67-kc792"]
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: E0220 12:20:28.478619 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerName="dnsmasq-dns"
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: I0220 12:20:28.478640 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerName="dnsmasq-dns"
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: E0220 12:20:28.478683 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="546ac957-a54d-45ab-aaf7-f0f22fbb5883" containerName="keystone-bootstrap"
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: I0220 12:20:28.478693 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="546ac957-a54d-45ab-aaf7-f0f22fbb5883" containerName="keystone-bootstrap"
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: E0220 12:20:28.478720 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerName="init"
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: I0220 12:20:28.478730 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerName="init"
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: I0220 12:20:28.479043 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="546ac957-a54d-45ab-aaf7-f0f22fbb5883" containerName="keystone-bootstrap"
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: I0220 12:20:28.479082 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="359935a2-8a2a-4d72-a9cd-2b4dbdbd4fa4" containerName="dnsmasq-dns"
Feb 20 12:20:28.480353 master-0 kubenswrapper[31420]: I0220 12:20:28.479970 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.483841 master-0 kubenswrapper[31420]: I0220 12:20:28.481811 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 20 12:20:28.483841 master-0 kubenswrapper[31420]: I0220 12:20:28.482635 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 20 12:20:28.483841 master-0 kubenswrapper[31420]: I0220 12:20:28.482935 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 20 12:20:28.483841 master-0 kubenswrapper[31420]: I0220 12:20:28.483369 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 20 12:20:28.483841 master-0 kubenswrapper[31420]: I0220 12:20:28.483740 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 20 12:20:28.491410 master-0 kubenswrapper[31420]: I0220 12:20:28.491131 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5947585c67-kc792"]
Feb 20 12:20:28.549361 master-0 kubenswrapper[31420]: I0220 12:20:28.549294 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-fernet-keys\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.549582 master-0 kubenswrapper[31420]: I0220 12:20:28.549382 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-scripts\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.549582 master-0 kubenswrapper[31420]: I0220 12:20:28.549418 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-config-data\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.549582 master-0 kubenswrapper[31420]: I0220 12:20:28.549470 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6mx4\" (UniqueName: \"kubernetes.io/projected/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-kube-api-access-l6mx4\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.549582 master-0 kubenswrapper[31420]: I0220 12:20:28.549511 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-internal-tls-certs\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.549721 master-0 kubenswrapper[31420]: I0220 12:20:28.549665 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-credential-keys\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.549721 master-0 kubenswrapper[31420]: I0220 12:20:28.549698 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-public-tls-certs\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.549788 master-0 kubenswrapper[31420]: I0220 12:20:28.549732 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-combined-ca-bundle\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.651871 master-0 kubenswrapper[31420]: I0220 12:20:28.651788 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-credential-keys\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.651871 master-0 kubenswrapper[31420]: I0220 12:20:28.651863 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-public-tls-certs\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.652166 master-0 kubenswrapper[31420]: I0220 12:20:28.651902 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-combined-ca-bundle\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.652166 master-0 kubenswrapper[31420]: I0220 12:20:28.651941 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-fernet-keys\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.652166 master-0 kubenswrapper[31420]: I0220 12:20:28.651978 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-scripts\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.652166 master-0 kubenswrapper[31420]: I0220 12:20:28.652014 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-config-data\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.652166 master-0 kubenswrapper[31420]: I0220 12:20:28.652065 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6mx4\" (UniqueName: \"kubernetes.io/projected/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-kube-api-access-l6mx4\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.652166 master-0 kubenswrapper[31420]: I0220 12:20:28.652109 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-internal-tls-certs\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.655659 master-0 kubenswrapper[31420]: I0220 12:20:28.655619 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-internal-tls-certs\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.655922 master-0 kubenswrapper[31420]: I0220 12:20:28.655881 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-fernet-keys\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.655979 master-0 kubenswrapper[31420]: I0220 12:20:28.655919 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-combined-ca-bundle\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.656402 master-0 kubenswrapper[31420]: I0220 12:20:28.656351 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-scripts\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.656686 master-0 kubenswrapper[31420]: I0220 12:20:28.656652 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-credential-keys\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.657092 master-0 kubenswrapper[31420]: I0220 12:20:28.657047 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-config-data\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.658097 master-0 kubenswrapper[31420]: I0220 12:20:28.658062 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-public-tls-certs\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.673561 master-0 kubenswrapper[31420]: I0220 12:20:28.673487 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6mx4\" (UniqueName: \"kubernetes.io/projected/a77b88bf-7a95-4f44-80c3-75df9a9a3c2b-kube-api-access-l6mx4\") pod \"keystone-5947585c67-kc792\" (UID: \"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b\") " pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:28.702870 master-0 kubenswrapper[31420]: I0220 12:20:28.702752 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-9z9n4" event={"ID":"1caf9802-b963-4368-ac29-e47812b48ad3","Type":"ContainerStarted","Data":"aa7560336d93a2765df2978574a3c3d391bc12dec63fac01cdb150e552294168"}
Feb 20 12:20:28.704677 master-0 kubenswrapper[31420]: I0220 12:20:28.704611 31420 generic.go:334] "Generic (PLEG): container finished" podID="f85180db-2d00-4ec9-b408-813c4db2d86b" containerID="fc1b6c7301a8aa1f6470e8ecb18f25402b2f6fd16e029212b08909682ebf8d84" exitCode=0
Feb 20 12:20:28.704677 master-0 kubenswrapper[31420]: I0220 12:20:28.704671 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-db-sync-wkljp" event={"ID":"f85180db-2d00-4ec9-b408-813c4db2d86b","Type":"ContainerDied","Data":"fc1b6c7301a8aa1f6470e8ecb18f25402b2f6fd16e029212b08909682ebf8d84"}
Feb 20 12:20:28.731336 master-0 kubenswrapper[31420]: I0220 12:20:28.731135 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-9z9n4" podStartSLOduration=10.091660108 podStartE2EDuration="17.731112067s" podCreationTimestamp="2026-02-20 12:20:11 +0000 UTC" firstStartedPulling="2026-02-20 12:20:19.529901075 +0000 UTC m=+924.249139326" lastFinishedPulling="2026-02-20 12:20:27.169353044 +0000 UTC m=+931.888591285" observedRunningTime="2026-02-20 12:20:28.721009992 +0000 UTC m=+933.440248243" watchObservedRunningTime="2026-02-20 12:20:28.731112067 +0000 UTC m=+933.450350308"
Feb 20 12:20:28.834949 master-0 kubenswrapper[31420]: I0220 12:20:28.834849 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5947585c67-kc792"
Feb 20 12:20:29.357382 master-0 kubenswrapper[31420]: I0220 12:20:29.353461 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5947585c67-kc792"]
Feb 20 12:20:29.360804 master-0 kubenswrapper[31420]: W0220 12:20:29.360752 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda77b88bf_7a95_4f44_80c3_75df9a9a3c2b.slice/crio-938dc584d445ec722b537bcbf68c8096f38d1349c48b267ef04fba803b7df43f WatchSource:0}: Error finding container 938dc584d445ec722b537bcbf68c8096f38d1349c48b267ef04fba803b7df43f: Status 404 returned error can't find the container with id 938dc584d445ec722b537bcbf68c8096f38d1349c48b267ef04fba803b7df43f
Feb 20 12:20:29.447183 master-0 kubenswrapper[31420]: I0220 12:20:29.447106 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:29.447183 master-0 kubenswrapper[31420]: I0220 12:20:29.447182 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:29.479096 master-0 kubenswrapper[31420]: I0220 12:20:29.479014 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:20:29.524249 master-0 kubenswrapper[31420]: I0220 12:20:29.524173 31420 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:20:29.718976 master-0 kubenswrapper[31420]: I0220 12:20:29.718893 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5947585c67-kc792" event={"ID":"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b","Type":"ContainerStarted","Data":"a23853f0f091e491100f7fa8886f775a0a967ad18e3800268685ecd2df8dbedc"} Feb 20 12:20:29.718976 master-0 kubenswrapper[31420]: I0220 12:20:29.718950 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5947585c67-kc792" event={"ID":"a77b88bf-7a95-4f44-80c3-75df9a9a3c2b","Type":"ContainerStarted","Data":"938dc584d445ec722b537bcbf68c8096f38d1349c48b267ef04fba803b7df43f"} Feb 20 12:20:29.722141 master-0 kubenswrapper[31420]: I0220 12:20:29.721709 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:20:29.722141 master-0 kubenswrapper[31420]: I0220 12:20:29.721739 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5947585c67-kc792" Feb 20 12:20:29.722141 master-0 kubenswrapper[31420]: I0220 12:20:29.721754 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:20:29.751351 master-0 kubenswrapper[31420]: I0220 12:20:29.751254 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5947585c67-kc792" podStartSLOduration=1.751232961 podStartE2EDuration="1.751232961s" podCreationTimestamp="2026-02-20 12:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:29.741306871 +0000 UTC m=+934.460545112" watchObservedRunningTime="2026-02-20 12:20:29.751232961 +0000 UTC m=+934.470471202" Feb 20 12:20:30.152696 master-0 kubenswrapper[31420]: I0220 12:20:30.152632 31420 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:30.203559 master-0 kubenswrapper[31420]: I0220 12:20:30.195290 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h5s5\" (UniqueName: \"kubernetes.io/projected/f85180db-2d00-4ec9-b408-813c4db2d86b-kube-api-access-6h5s5\") pod \"f85180db-2d00-4ec9-b408-813c4db2d86b\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " Feb 20 12:20:30.203559 master-0 kubenswrapper[31420]: I0220 12:20:30.195381 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-scripts\") pod \"f85180db-2d00-4ec9-b408-813c4db2d86b\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " Feb 20 12:20:30.203559 master-0 kubenswrapper[31420]: I0220 12:20:30.195466 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-combined-ca-bundle\") pod \"f85180db-2d00-4ec9-b408-813c4db2d86b\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " Feb 20 12:20:30.203559 master-0 kubenswrapper[31420]: I0220 12:20:30.195553 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-config-data\") pod \"f85180db-2d00-4ec9-b408-813c4db2d86b\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " Feb 20 12:20:30.203559 master-0 kubenswrapper[31420]: I0220 12:20:30.195572 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-db-sync-config-data\") pod \"f85180db-2d00-4ec9-b408-813c4db2d86b\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " Feb 20 12:20:30.203559 master-0 kubenswrapper[31420]: 
I0220 12:20:30.195682 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85180db-2d00-4ec9-b408-813c4db2d86b-etc-machine-id\") pod \"f85180db-2d00-4ec9-b408-813c4db2d86b\" (UID: \"f85180db-2d00-4ec9-b408-813c4db2d86b\") " Feb 20 12:20:30.203559 master-0 kubenswrapper[31420]: I0220 12:20:30.196129 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85180db-2d00-4ec9-b408-813c4db2d86b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f85180db-2d00-4ec9-b408-813c4db2d86b" (UID: "f85180db-2d00-4ec9-b408-813c4db2d86b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:30.203559 master-0 kubenswrapper[31420]: I0220 12:20:30.202622 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85180db-2d00-4ec9-b408-813c4db2d86b-kube-api-access-6h5s5" (OuterVolumeSpecName: "kube-api-access-6h5s5") pod "f85180db-2d00-4ec9-b408-813c4db2d86b" (UID: "f85180db-2d00-4ec9-b408-813c4db2d86b"). InnerVolumeSpecName "kube-api-access-6h5s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:30.203559 master-0 kubenswrapper[31420]: I0220 12:20:30.203412 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f85180db-2d00-4ec9-b408-813c4db2d86b" (UID: "f85180db-2d00-4ec9-b408-813c4db2d86b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:30.210498 master-0 kubenswrapper[31420]: I0220 12:20:30.210424 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-scripts" (OuterVolumeSpecName: "scripts") pod "f85180db-2d00-4ec9-b408-813c4db2d86b" (UID: "f85180db-2d00-4ec9-b408-813c4db2d86b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:30.228307 master-0 kubenswrapper[31420]: I0220 12:20:30.228240 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85180db-2d00-4ec9-b408-813c4db2d86b" (UID: "f85180db-2d00-4ec9-b408-813c4db2d86b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:30.251135 master-0 kubenswrapper[31420]: I0220 12:20:30.251071 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-config-data" (OuterVolumeSpecName: "config-data") pod "f85180db-2d00-4ec9-b408-813c4db2d86b" (UID: "f85180db-2d00-4ec9-b408-813c4db2d86b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:30.297797 master-0 kubenswrapper[31420]: I0220 12:20:30.297727 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:30.297797 master-0 kubenswrapper[31420]: I0220 12:20:30.297781 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:30.297797 master-0 kubenswrapper[31420]: I0220 12:20:30.297799 31420 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:30.298119 master-0 kubenswrapper[31420]: I0220 12:20:30.297815 31420 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85180db-2d00-4ec9-b408-813c4db2d86b-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:30.298119 master-0 kubenswrapper[31420]: I0220 12:20:30.297830 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h5s5\" (UniqueName: \"kubernetes.io/projected/f85180db-2d00-4ec9-b408-813c4db2d86b-kube-api-access-6h5s5\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:30.298119 master-0 kubenswrapper[31420]: I0220 12:20:30.297844 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85180db-2d00-4ec9-b408-813c4db2d86b-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:30.745625 master-0 kubenswrapper[31420]: I0220 12:20:30.745388 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-db-sync-wkljp" 
event={"ID":"f85180db-2d00-4ec9-b408-813c4db2d86b","Type":"ContainerDied","Data":"6d238b393f7c900a53df45f0eaa5cfd2a7d934f9929ad4f3cf7ea5d3bf94a8e8"} Feb 20 12:20:30.745625 master-0 kubenswrapper[31420]: I0220 12:20:30.745481 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d238b393f7c900a53df45f0eaa5cfd2a7d934f9929ad4f3cf7ea5d3bf94a8e8" Feb 20 12:20:30.746285 master-0 kubenswrapper[31420]: I0220 12:20:30.745641 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-db-sync-wkljp" Feb 20 12:20:31.036482 master-0 kubenswrapper[31420]: I0220 12:20:31.036420 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d44a4-scheduler-0"] Feb 20 12:20:31.036921 master-0 kubenswrapper[31420]: E0220 12:20:31.036900 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85180db-2d00-4ec9-b408-813c4db2d86b" containerName="cinder-d44a4-db-sync" Feb 20 12:20:31.036921 master-0 kubenswrapper[31420]: I0220 12:20:31.036918 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85180db-2d00-4ec9-b408-813c4db2d86b" containerName="cinder-d44a4-db-sync" Feb 20 12:20:31.037740 master-0 kubenswrapper[31420]: I0220 12:20:31.037715 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85180db-2d00-4ec9-b408-813c4db2d86b" containerName="cinder-d44a4-db-sync" Feb 20 12:20:31.043453 master-0 kubenswrapper[31420]: I0220 12:20:31.043399 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.054071 master-0 kubenswrapper[31420]: I0220 12:20:31.052037 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-scripts" Feb 20 12:20:31.054071 master-0 kubenswrapper[31420]: I0220 12:20:31.052437 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-config-data" Feb 20 12:20:31.054071 master-0 kubenswrapper[31420]: I0220 12:20:31.052621 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-scheduler-config-data" Feb 20 12:20:31.089367 master-0 kubenswrapper[31420]: I0220 12:20:31.087252 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-scheduler-0"] Feb 20 12:20:31.140410 master-0 kubenswrapper[31420]: I0220 12:20:31.140348 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-combined-ca-bundle\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.140670 master-0 kubenswrapper[31420]: I0220 12:20:31.140451 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qpzz\" (UniqueName: \"kubernetes.io/projected/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-kube-api-access-5qpzz\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.140670 master-0 kubenswrapper[31420]: I0220 12:20:31.140501 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-etc-machine-id\") pod \"cinder-d44a4-scheduler-0\" (UID: 
\"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.140670 master-0 kubenswrapper[31420]: I0220 12:20:31.140548 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data-custom\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.140670 master-0 kubenswrapper[31420]: I0220 12:20:31.140653 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-scripts\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.140924 master-0 kubenswrapper[31420]: I0220 12:20:31.140872 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.207893 master-0 kubenswrapper[31420]: I0220 12:20:31.206795 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb58d95b7-9pzpf"] Feb 20 12:20:31.212189 master-0 kubenswrapper[31420]: I0220 12:20:31.208561 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242314 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-nb\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242428 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-svc\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242468 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-scripts\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242510 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-config\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242628 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data\") pod \"cinder-d44a4-scheduler-0\" (UID: 
\"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242651 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-sb\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242691 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-combined-ca-bundle\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242716 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-swift-storage-0\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242754 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qpzz\" (UniqueName: \"kubernetes.io/projected/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-kube-api-access-5qpzz\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242789 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-etc-machine-id\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242810 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data-custom\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.243789 master-0 kubenswrapper[31420]: I0220 12:20:31.242828 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq4zd\" (UniqueName: \"kubernetes.io/projected/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-kube-api-access-pq4zd\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.250156 master-0 kubenswrapper[31420]: I0220 12:20:31.246484 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-scripts\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.251613 master-0 kubenswrapper[31420]: I0220 12:20:31.251545 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.256957 master-0 kubenswrapper[31420]: I0220 12:20:31.254961 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-combined-ca-bundle\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.256957 master-0 kubenswrapper[31420]: I0220 12:20:31.255338 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-etc-machine-id\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.256957 master-0 kubenswrapper[31420]: I0220 12:20:31.255375 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d44a4-volume-lvm-iscsi-0"] Feb 20 12:20:31.257294 master-0 kubenswrapper[31420]: I0220 12:20:31.257265 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.262783 master-0 kubenswrapper[31420]: I0220 12:20:31.262733 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb58d95b7-9pzpf"] Feb 20 12:20:31.284246 master-0 kubenswrapper[31420]: I0220 12:20:31.284194 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data-custom\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.284583 master-0 kubenswrapper[31420]: I0220 12:20:31.284563 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-volume-lvm-iscsi-config-data" Feb 20 12:20:31.295414 master-0 kubenswrapper[31420]: I0220 12:20:31.294030 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qpzz\" (UniqueName: 
\"kubernetes.io/projected/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-kube-api-access-5qpzz\") pod \"cinder-d44a4-scheduler-0\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.295414 master-0 kubenswrapper[31420]: I0220 12:20:31.294839 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-volume-lvm-iscsi-0"] Feb 20 12:20:31.310245 master-0 kubenswrapper[31420]: I0220 12:20:31.310182 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d44a4-backup-0"] Feb 20 12:20:31.312641 master-0 kubenswrapper[31420]: I0220 12:20:31.312608 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.315791 master-0 kubenswrapper[31420]: I0220 12:20:31.315736 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-backup-config-data" Feb 20 12:20:31.321862 master-0 kubenswrapper[31420]: I0220 12:20:31.321809 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-backup-0"] Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343556 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-machine-id\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343596 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-iscsi\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.345635 master-0 
kubenswrapper[31420]: I0220 12:20:31.343619 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-run\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343635 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-sys\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343656 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-iscsi\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343673 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343691 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data-custom\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 
12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343720 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq4zd\" (UniqueName: \"kubernetes.io/projected/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-kube-api-access-pq4zd\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343737 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343755 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-lib-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343780 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-nb\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343801 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-nvme\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " 
pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343819 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343836 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-brick\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343854 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-lib-modules\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343876 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prj2f\" (UniqueName: \"kubernetes.io/projected/1c9aaa04-a36f-4409-85d4-9199555742bb-kube-api-access-prj2f\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343894 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-sys\") pod 
\"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343913 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-machine-id\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343942 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-svc\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343958 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-brick\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.343982 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.344605 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-config\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.344719 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-nb\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.345635 master-0 kubenswrapper[31420]: I0220 12:20:31.344741 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h67gl\" (UniqueName: \"kubernetes.io/projected/0c21113a-3e64-4d0b-861f-16aac0d2828e-kube-api-access-h67gl\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.347061 master-0 kubenswrapper[31420]: I0220 12:20:31.346657 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-svc\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.347061 master-0 kubenswrapper[31420]: I0220 12:20:31.346920 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-nvme\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.347061 master-0 kubenswrapper[31420]: I0220 12:20:31.346987 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-scripts\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.347061 master-0 kubenswrapper[31420]: I0220 12:20:31.347018 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data-custom\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.347061 master-0 kubenswrapper[31420]: I0220 12:20:31.347045 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-scripts\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.347228 master-0 kubenswrapper[31420]: I0220 12:20:31.347082 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-sb\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.347228 master-0 kubenswrapper[31420]: I0220 12:20:31.347129 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-combined-ca-bundle\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.347228 master-0 kubenswrapper[31420]: I0220 12:20:31.347152 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-run\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.347228 master-0 kubenswrapper[31420]: I0220 12:20:31.347171 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-dev\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.347228 master-0 kubenswrapper[31420]: I0220 12:20:31.347227 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-lib-modules\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.347377 master-0 kubenswrapper[31420]: I0220 12:20:31.347267 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-swift-storage-0\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.347377 master-0 kubenswrapper[31420]: I0220 12:20:31.346941 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-config\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.348769 master-0 kubenswrapper[31420]: I0220 12:20:31.347293 31420 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-lib-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.348926 master-0 kubenswrapper[31420]: I0220 12:20:31.348875 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-dev\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.362918 master-0 kubenswrapper[31420]: I0220 12:20:31.348941 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-combined-ca-bundle\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.362918 master-0 kubenswrapper[31420]: I0220 12:20:31.349708 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-sb\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.362918 master-0 kubenswrapper[31420]: I0220 12:20:31.349964 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-swift-storage-0\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.372247 master-0 
kubenswrapper[31420]: I0220 12:20:31.372196 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq4zd\" (UniqueName: \"kubernetes.io/projected/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-kube-api-access-pq4zd\") pod \"dnsmasq-dns-bb58d95b7-9pzpf\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.407247 master-0 kubenswrapper[31420]: I0220 12:20:31.402090 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:31.423257 master-0 kubenswrapper[31420]: I0220 12:20:31.423211 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d44a4-api-0"] Feb 20 12:20:31.427735 master-0 kubenswrapper[31420]: I0220 12:20:31.427704 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.429784 master-0 kubenswrapper[31420]: I0220 12:20:31.429752 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-api-config-data" Feb 20 12:20:31.442821 master-0 kubenswrapper[31420]: I0220 12:20:31.442569 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-api-0"] Feb 20 12:20:31.450801 master-0 kubenswrapper[31420]: I0220 12:20:31.450433 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-sys\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.450801 master-0 kubenswrapper[31420]: I0220 12:20:31.450488 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-machine-id\") pod \"cinder-d44a4-backup-0\" (UID: 
\"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.450801 master-0 kubenswrapper[31420]: I0220 12:20:31.450561 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-brick\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.450801 master-0 kubenswrapper[31420]: I0220 12:20:31.450605 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.450801 master-0 kubenswrapper[31420]: I0220 12:20:31.450645 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h67gl\" (UniqueName: \"kubernetes.io/projected/0c21113a-3e64-4d0b-861f-16aac0d2828e-kube-api-access-h67gl\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.450801 master-0 kubenswrapper[31420]: I0220 12:20:31.450675 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-nvme\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455278 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-scripts\") pod \"cinder-d44a4-backup-0\" (UID: 
\"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.450981 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-brick\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.451023 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-machine-id\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.452144 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-nvme\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455331 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1640ca84-7fa2-44a9-89ff-da78a4b357df-etc-machine-id\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455472 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data-custom\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: 
\"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455520 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-scripts\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455571 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data-custom\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455629 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-combined-ca-bundle\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455649 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-run\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455669 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-dev\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " 
pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455692 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640ca84-7fa2-44a9-89ff-da78a4b357df-logs\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455716 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-combined-ca-bundle\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.451004 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-sys\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455747 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txtl\" (UniqueName: \"kubernetes.io/projected/1640ca84-7fa2-44a9-89ff-da78a4b357df-kube-api-access-5txtl\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455793 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-lib-modules\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " 
pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455866 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-run\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455896 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-lib-modules\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.455889 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-dev\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.457125 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-lib-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.457221 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-dev\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 
kubenswrapper[31420]: I0220 12:20:31.457280 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-combined-ca-bundle\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.457320 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-machine-id\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.457343 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-iscsi\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.457367 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-run\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.457484 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-sys\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: 
I0220 12:20:31.457998 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-scripts\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458054 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-iscsi\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458099 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458172 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data-custom\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458280 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458333 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-lib-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458590 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-nvme\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458626 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458746 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-machine-id\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458763 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-run\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458798 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-iscsi\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.458883 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-sys\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.459052 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.459167 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-lib-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.459225 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-nvme\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.459991 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-scripts\") pod 
\"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460437 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460490 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-brick\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460541 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-lib-modules\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460591 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prj2f\" (UniqueName: \"kubernetes.io/projected/1c9aaa04-a36f-4409-85d4-9199555742bb-kube-api-access-prj2f\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460723 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data\") 
pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460802 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-iscsi\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460828 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-dev\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460852 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-brick\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460882 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460888 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-lib-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: 
\"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.464168 master-0 kubenswrapper[31420]: I0220 12:20:31.460807 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-lib-modules\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.475192 master-0 kubenswrapper[31420]: I0220 12:20:31.470394 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data-custom\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.475192 master-0 kubenswrapper[31420]: I0220 12:20:31.470395 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-combined-ca-bundle\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.475192 master-0 kubenswrapper[31420]: I0220 12:20:31.471152 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.475192 master-0 kubenswrapper[31420]: I0220 12:20:31.471425 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-combined-ca-bundle\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: 
\"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.475192 master-0 kubenswrapper[31420]: I0220 12:20:31.471994 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-scripts\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.475192 master-0 kubenswrapper[31420]: I0220 12:20:31.475021 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h67gl\" (UniqueName: \"kubernetes.io/projected/0c21113a-3e64-4d0b-861f-16aac0d2828e-kube-api-access-h67gl\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.482135 master-0 kubenswrapper[31420]: I0220 12:20:31.482100 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data-custom\") pod \"cinder-d44a4-backup-0\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.482296 master-0 kubenswrapper[31420]: I0220 12:20:31.482149 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prj2f\" (UniqueName: \"kubernetes.io/projected/1c9aaa04-a36f-4409-85d4-9199555742bb-kube-api-access-prj2f\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.542328 master-0 kubenswrapper[31420]: I0220 12:20:31.540892 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:31.546343 master-0 kubenswrapper[31420]: I0220 12:20:31.545397 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.562262 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.562867 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1640ca84-7fa2-44a9-89ff-da78a4b357df-etc-machine-id\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.562926 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data-custom\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.562997 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640ca84-7fa2-44a9-89ff-da78a4b357df-logs\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.563750 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1640ca84-7fa2-44a9-89ff-da78a4b357df-etc-machine-id\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.563814 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-combined-ca-bundle\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.563845 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txtl\" (UniqueName: \"kubernetes.io/projected/1640ca84-7fa2-44a9-89ff-da78a4b357df-kube-api-access-5txtl\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.564054 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-scripts\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.564268 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.566754 master-0 kubenswrapper[31420]: I0220 12:20:31.564276 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640ca84-7fa2-44a9-89ff-da78a4b357df-logs\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.572651 master-0 kubenswrapper[31420]: I0220 12:20:31.572616 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data\") pod \"cinder-d44a4-api-0\" (UID: 
\"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.575152 master-0 kubenswrapper[31420]: I0220 12:20:31.575120 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-scripts\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.585566 master-0 kubenswrapper[31420]: I0220 12:20:31.583468 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data-custom\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.587234 master-0 kubenswrapper[31420]: I0220 12:20:31.587189 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txtl\" (UniqueName: \"kubernetes.io/projected/1640ca84-7fa2-44a9-89ff-da78a4b357df-kube-api-access-5txtl\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.645941 master-0 kubenswrapper[31420]: I0220 12:20:31.645872 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-combined-ca-bundle\") pod \"cinder-d44a4-api-0\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.751782 master-0 kubenswrapper[31420]: I0220 12:20:31.751734 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:31.768542 master-0 kubenswrapper[31420]: I0220 12:20:31.768060 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 12:20:31.768542 master-0 kubenswrapper[31420]: I0220 12:20:31.768091 31420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 12:20:32.025440 master-0 kubenswrapper[31420]: I0220 12:20:32.025319 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:20:32.065771 master-0 kubenswrapper[31420]: I0220 12:20:32.065727 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-scheduler-0"] Feb 20 12:20:32.110930 master-0 kubenswrapper[31420]: I0220 12:20:32.110821 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:20:32.504269 master-0 kubenswrapper[31420]: I0220 12:20:32.504021 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-volume-lvm-iscsi-0"] Feb 20 12:20:32.522312 master-0 kubenswrapper[31420]: I0220 12:20:32.520869 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb58d95b7-9pzpf"] Feb 20 12:20:32.531876 master-0 kubenswrapper[31420]: I0220 12:20:32.531583 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-api-0"] Feb 20 12:20:32.533223 master-0 kubenswrapper[31420]: W0220 12:20:32.533173 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c9aaa04_a36f_4409_85d4_9199555742bb.slice/crio-d2a5128d49beb16a0ccbc451edb50e285f00e100d2ce784efe2df0f046754dea WatchSource:0}: Error finding container d2a5128d49beb16a0ccbc451edb50e285f00e100d2ce784efe2df0f046754dea: Status 404 returned error can't find the container with id 
d2a5128d49beb16a0ccbc451edb50e285f00e100d2ce784efe2df0f046754dea Feb 20 12:20:32.540050 master-0 kubenswrapper[31420]: W0220 12:20:32.539909 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1640ca84_7fa2_44a9_89ff_da78a4b357df.slice/crio-9756e1aaae5d8217b6ce204a817c72d36ef66c90ddd7e2b5502ca7198f417621 WatchSource:0}: Error finding container 9756e1aaae5d8217b6ce204a817c72d36ef66c90ddd7e2b5502ca7198f417621: Status 404 returned error can't find the container with id 9756e1aaae5d8217b6ce204a817c72d36ef66c90ddd7e2b5502ca7198f417621 Feb 20 12:20:32.616019 master-0 kubenswrapper[31420]: I0220 12:20:32.615962 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-backup-0"] Feb 20 12:20:32.650716 master-0 kubenswrapper[31420]: W0220 12:20:32.650657 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c21113a_3e64_4d0b_861f_16aac0d2828e.slice/crio-74b35e21dfedd908fb52194f379158ab37462b04efc99b30f298e6b4a46584dd WatchSource:0}: Error finding container 74b35e21dfedd908fb52194f379158ab37462b04efc99b30f298e6b4a46584dd: Status 404 returned error can't find the container with id 74b35e21dfedd908fb52194f379158ab37462b04efc99b30f298e6b4a46584dd Feb 20 12:20:32.784037 master-0 kubenswrapper[31420]: I0220 12:20:32.780191 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-backup-0" event={"ID":"0c21113a-3e64-4d0b-861f-16aac0d2828e","Type":"ContainerStarted","Data":"74b35e21dfedd908fb52194f379158ab37462b04efc99b30f298e6b4a46584dd"} Feb 20 12:20:32.784037 master-0 kubenswrapper[31420]: I0220 12:20:32.782454 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" event={"ID":"1c9aaa04-a36f-4409-85d4-9199555742bb","Type":"ContainerStarted","Data":"d2a5128d49beb16a0ccbc451edb50e285f00e100d2ce784efe2df0f046754dea"} 
Feb 20 12:20:32.784037 master-0 kubenswrapper[31420]: I0220 12:20:32.783683 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-api-0" event={"ID":"1640ca84-7fa2-44a9-89ff-da78a4b357df","Type":"ContainerStarted","Data":"9756e1aaae5d8217b6ce204a817c72d36ef66c90ddd7e2b5502ca7198f417621"} Feb 20 12:20:32.787992 master-0 kubenswrapper[31420]: I0220 12:20:32.787604 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" event={"ID":"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93","Type":"ContainerStarted","Data":"1db8be37c233d4b3b05de091a56580f3032a95dd659214bdff23afa20c807f2e"} Feb 20 12:20:32.790480 master-0 kubenswrapper[31420]: I0220 12:20:32.790395 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-scheduler-0" event={"ID":"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c","Type":"ContainerStarted","Data":"87a4fa58cd182c90e5acb960d708b66f44136fa35a70d72c73fe871b4e8af3c1"} Feb 20 12:20:33.818368 master-0 kubenswrapper[31420]: I0220 12:20:33.818305 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-scheduler-0" event={"ID":"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c","Type":"ContainerStarted","Data":"fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c"} Feb 20 12:20:33.868759 master-0 kubenswrapper[31420]: I0220 12:20:33.853255 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-api-0" event={"ID":"1640ca84-7fa2-44a9-89ff-da78a4b357df","Type":"ContainerStarted","Data":"418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300"} Feb 20 12:20:33.872934 master-0 kubenswrapper[31420]: I0220 12:20:33.872861 31420 generic.go:334] "Generic (PLEG): container finished" podID="e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" containerID="0339bdbae22ccf7db0ffb560e99b0ec908249c0758528d63c04ea42f50bbf583" exitCode=0 Feb 20 12:20:33.873285 master-0 kubenswrapper[31420]: I0220 12:20:33.873223 31420 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" event={"ID":"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93","Type":"ContainerDied","Data":"0339bdbae22ccf7db0ffb560e99b0ec908249c0758528d63c04ea42f50bbf583"} Feb 20 12:20:33.976020 master-0 kubenswrapper[31420]: I0220 12:20:33.972245 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d44a4-api-0"] Feb 20 12:20:34.887225 master-0 kubenswrapper[31420]: I0220 12:20:34.887154 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" event={"ID":"1c9aaa04-a36f-4409-85d4-9199555742bb","Type":"ContainerStarted","Data":"b6c0206c6145effc16fdb8403acedaa44d75fe3936abb334b1cd205396cd982c"} Feb 20 12:20:34.887225 master-0 kubenswrapper[31420]: I0220 12:20:34.887215 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" event={"ID":"1c9aaa04-a36f-4409-85d4-9199555742bb","Type":"ContainerStarted","Data":"bbdaf214a60b249dbdce79322999fb549904dfeb3a103943f978a29d1da1ecca"} Feb 20 12:20:34.892774 master-0 kubenswrapper[31420]: I0220 12:20:34.892687 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-api-0" event={"ID":"1640ca84-7fa2-44a9-89ff-da78a4b357df","Type":"ContainerStarted","Data":"ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4"} Feb 20 12:20:34.892959 master-0 kubenswrapper[31420]: I0220 12:20:34.892791 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:34.892959 master-0 kubenswrapper[31420]: I0220 12:20:34.892795 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-d44a4-api-0" podUID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerName="cinder-d44a4-api-log" containerID="cri-o://418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300" gracePeriod=30 Feb 20 12:20:34.892959 master-0 kubenswrapper[31420]: I0220 12:20:34.892822 
31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-d44a4-api-0" podUID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerName="cinder-api" containerID="cri-o://ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4" gracePeriod=30 Feb 20 12:20:34.896373 master-0 kubenswrapper[31420]: I0220 12:20:34.896167 31420 generic.go:334] "Generic (PLEG): container finished" podID="e730e756-3c53-48ff-a27d-5ddbf042a996" containerID="af7bcd0389da6ecbf387e067862097cbdde12c6359b8812eb3086092ba104b4a" exitCode=0 Feb 20 12:20:34.896373 master-0 kubenswrapper[31420]: I0220 12:20:34.896236 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v9jsf" event={"ID":"e730e756-3c53-48ff-a27d-5ddbf042a996","Type":"ContainerDied","Data":"af7bcd0389da6ecbf387e067862097cbdde12c6359b8812eb3086092ba104b4a"} Feb 20 12:20:34.901860 master-0 kubenswrapper[31420]: I0220 12:20:34.901510 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" event={"ID":"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93","Type":"ContainerStarted","Data":"7ccc2549d1d3a42a3ce2c1ceb522f74b64d59bfc690fa593b2b426ce71539fe6"} Feb 20 12:20:34.902704 master-0 kubenswrapper[31420]: I0220 12:20:34.902607 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:34.906071 master-0 kubenswrapper[31420]: I0220 12:20:34.906023 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-scheduler-0" event={"ID":"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c","Type":"ContainerStarted","Data":"6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4"} Feb 20 12:20:34.908810 master-0 kubenswrapper[31420]: I0220 12:20:34.908755 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-backup-0" 
event={"ID":"0c21113a-3e64-4d0b-861f-16aac0d2828e","Type":"ContainerStarted","Data":"a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc"} Feb 20 12:20:34.908935 master-0 kubenswrapper[31420]: I0220 12:20:34.908822 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-backup-0" event={"ID":"0c21113a-3e64-4d0b-861f-16aac0d2828e","Type":"ContainerStarted","Data":"881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4"} Feb 20 12:20:34.934278 master-0 kubenswrapper[31420]: I0220 12:20:34.933659 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" podStartSLOduration=3.183846648 podStartE2EDuration="3.933635851s" podCreationTimestamp="2026-02-20 12:20:31 +0000 UTC" firstStartedPulling="2026-02-20 12:20:32.536706623 +0000 UTC m=+937.255944864" lastFinishedPulling="2026-02-20 12:20:33.286495825 +0000 UTC m=+938.005734067" observedRunningTime="2026-02-20 12:20:34.921493848 +0000 UTC m=+939.640732119" watchObservedRunningTime="2026-02-20 12:20:34.933635851 +0000 UTC m=+939.652874102" Feb 20 12:20:34.950047 master-0 kubenswrapper[31420]: I0220 12:20:34.949932 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" podStartSLOduration=3.949910521 podStartE2EDuration="3.949910521s" podCreationTimestamp="2026-02-20 12:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:34.945898988 +0000 UTC m=+939.665137239" watchObservedRunningTime="2026-02-20 12:20:34.949910521 +0000 UTC m=+939.669148762" Feb 20 12:20:35.056457 master-0 kubenswrapper[31420]: I0220 12:20:35.056247 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d44a4-backup-0" podStartSLOduration=3.14819085 podStartE2EDuration="4.056225776s" podCreationTimestamp="2026-02-20 12:20:31 +0000 
UTC" firstStartedPulling="2026-02-20 12:20:32.656865339 +0000 UTC m=+937.376103580" lastFinishedPulling="2026-02-20 12:20:33.564900265 +0000 UTC m=+938.284138506" observedRunningTime="2026-02-20 12:20:35.052874662 +0000 UTC m=+939.772112903" watchObservedRunningTime="2026-02-20 12:20:35.056225776 +0000 UTC m=+939.775464017" Feb 20 12:20:35.056926 master-0 kubenswrapper[31420]: I0220 12:20:35.056841 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d44a4-scheduler-0" podStartSLOduration=3.338568962 podStartE2EDuration="4.056836174s" podCreationTimestamp="2026-02-20 12:20:31 +0000 UTC" firstStartedPulling="2026-02-20 12:20:32.069290891 +0000 UTC m=+936.788529122" lastFinishedPulling="2026-02-20 12:20:32.787558103 +0000 UTC m=+937.506796334" observedRunningTime="2026-02-20 12:20:35.010093562 +0000 UTC m=+939.729331823" watchObservedRunningTime="2026-02-20 12:20:35.056836174 +0000 UTC m=+939.776074415" Feb 20 12:20:35.095320 master-0 kubenswrapper[31420]: I0220 12:20:35.095215 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d44a4-api-0" podStartSLOduration=4.095195528 podStartE2EDuration="4.095195528s" podCreationTimestamp="2026-02-20 12:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:35.093079508 +0000 UTC m=+939.812317769" watchObservedRunningTime="2026-02-20 12:20:35.095195528 +0000 UTC m=+939.814433769" Feb 20 12:20:35.658824 master-0 kubenswrapper[31420]: I0220 12:20:35.658772 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:35.790716 master-0 kubenswrapper[31420]: I0220 12:20:35.790652 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-scripts\") pod \"1640ca84-7fa2-44a9-89ff-da78a4b357df\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " Feb 20 12:20:35.790862 master-0 kubenswrapper[31420]: I0220 12:20:35.790736 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data\") pod \"1640ca84-7fa2-44a9-89ff-da78a4b357df\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " Feb 20 12:20:35.790917 master-0 kubenswrapper[31420]: I0220 12:20:35.790854 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-combined-ca-bundle\") pod \"1640ca84-7fa2-44a9-89ff-da78a4b357df\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " Feb 20 12:20:35.790978 master-0 kubenswrapper[31420]: I0220 12:20:35.790961 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640ca84-7fa2-44a9-89ff-da78a4b357df-logs\") pod \"1640ca84-7fa2-44a9-89ff-da78a4b357df\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " Feb 20 12:20:35.791028 master-0 kubenswrapper[31420]: I0220 12:20:35.791005 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data-custom\") pod \"1640ca84-7fa2-44a9-89ff-da78a4b357df\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " Feb 20 12:20:35.791077 master-0 kubenswrapper[31420]: I0220 12:20:35.791054 31420 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5txtl\" (UniqueName: \"kubernetes.io/projected/1640ca84-7fa2-44a9-89ff-da78a4b357df-kube-api-access-5txtl\") pod \"1640ca84-7fa2-44a9-89ff-da78a4b357df\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " Feb 20 12:20:35.791127 master-0 kubenswrapper[31420]: I0220 12:20:35.791109 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1640ca84-7fa2-44a9-89ff-da78a4b357df-etc-machine-id\") pod \"1640ca84-7fa2-44a9-89ff-da78a4b357df\" (UID: \"1640ca84-7fa2-44a9-89ff-da78a4b357df\") " Feb 20 12:20:35.791475 master-0 kubenswrapper[31420]: I0220 12:20:35.791387 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1640ca84-7fa2-44a9-89ff-da78a4b357df-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1640ca84-7fa2-44a9-89ff-da78a4b357df" (UID: "1640ca84-7fa2-44a9-89ff-da78a4b357df"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:35.791659 master-0 kubenswrapper[31420]: I0220 12:20:35.791503 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1640ca84-7fa2-44a9-89ff-da78a4b357df-logs" (OuterVolumeSpecName: "logs") pod "1640ca84-7fa2-44a9-89ff-da78a4b357df" (UID: "1640ca84-7fa2-44a9-89ff-da78a4b357df"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:20:35.791908 master-0 kubenswrapper[31420]: I0220 12:20:35.791872 31420 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1640ca84-7fa2-44a9-89ff-da78a4b357df-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:35.791908 master-0 kubenswrapper[31420]: I0220 12:20:35.791896 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1640ca84-7fa2-44a9-89ff-da78a4b357df-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:35.795404 master-0 kubenswrapper[31420]: I0220 12:20:35.795337 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1640ca84-7fa2-44a9-89ff-da78a4b357df-kube-api-access-5txtl" (OuterVolumeSpecName: "kube-api-access-5txtl") pod "1640ca84-7fa2-44a9-89ff-da78a4b357df" (UID: "1640ca84-7fa2-44a9-89ff-da78a4b357df"). InnerVolumeSpecName "kube-api-access-5txtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:35.796121 master-0 kubenswrapper[31420]: I0220 12:20:35.796076 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1640ca84-7fa2-44a9-89ff-da78a4b357df" (UID: "1640ca84-7fa2-44a9-89ff-da78a4b357df"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:35.796600 master-0 kubenswrapper[31420]: I0220 12:20:35.796548 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-scripts" (OuterVolumeSpecName: "scripts") pod "1640ca84-7fa2-44a9-89ff-da78a4b357df" (UID: "1640ca84-7fa2-44a9-89ff-da78a4b357df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:35.831691 master-0 kubenswrapper[31420]: I0220 12:20:35.831594 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1640ca84-7fa2-44a9-89ff-da78a4b357df" (UID: "1640ca84-7fa2-44a9-89ff-da78a4b357df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:35.855766 master-0 kubenswrapper[31420]: I0220 12:20:35.853319 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data" (OuterVolumeSpecName: "config-data") pod "1640ca84-7fa2-44a9-89ff-da78a4b357df" (UID: "1640ca84-7fa2-44a9-89ff-da78a4b357df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:35.894317 master-0 kubenswrapper[31420]: I0220 12:20:35.894260 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:35.894981 master-0 kubenswrapper[31420]: I0220 12:20:35.894960 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:35.895118 master-0 kubenswrapper[31420]: I0220 12:20:35.895102 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:35.895209 master-0 kubenswrapper[31420]: I0220 12:20:35.895195 31420 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1640ca84-7fa2-44a9-89ff-da78a4b357df-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:35.895298 master-0 kubenswrapper[31420]: I0220 12:20:35.895284 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5txtl\" (UniqueName: \"kubernetes.io/projected/1640ca84-7fa2-44a9-89ff-da78a4b357df-kube-api-access-5txtl\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:35.929437 master-0 kubenswrapper[31420]: I0220 12:20:35.929360 31420 generic.go:334] "Generic (PLEG): container finished" podID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerID="ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4" exitCode=0 Feb 20 12:20:35.929437 master-0 kubenswrapper[31420]: I0220 12:20:35.929408 31420 generic.go:334] "Generic (PLEG): container finished" podID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerID="418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300" exitCode=143 Feb 20 12:20:35.929437 master-0 kubenswrapper[31420]: I0220 12:20:35.929428 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:35.929813 master-0 kubenswrapper[31420]: I0220 12:20:35.929519 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-api-0" event={"ID":"1640ca84-7fa2-44a9-89ff-da78a4b357df","Type":"ContainerDied","Data":"ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4"} Feb 20 12:20:35.929813 master-0 kubenswrapper[31420]: I0220 12:20:35.929572 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-api-0" event={"ID":"1640ca84-7fa2-44a9-89ff-da78a4b357df","Type":"ContainerDied","Data":"418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300"} Feb 20 12:20:35.929813 master-0 kubenswrapper[31420]: I0220 12:20:35.929586 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-api-0" event={"ID":"1640ca84-7fa2-44a9-89ff-da78a4b357df","Type":"ContainerDied","Data":"9756e1aaae5d8217b6ce204a817c72d36ef66c90ddd7e2b5502ca7198f417621"} Feb 20 12:20:35.929813 master-0 kubenswrapper[31420]: I0220 12:20:35.929605 31420 scope.go:117] "RemoveContainer" containerID="ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4" Feb 20 12:20:35.983660 master-0 kubenswrapper[31420]: I0220 12:20:35.983008 31420 scope.go:117] "RemoveContainer" containerID="418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300" Feb 20 12:20:35.995601 master-0 kubenswrapper[31420]: I0220 12:20:35.995568 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d44a4-api-0"] Feb 20 12:20:36.026887 master-0 kubenswrapper[31420]: I0220 12:20:36.026842 31420 scope.go:117] "RemoveContainer" containerID="ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4" Feb 20 12:20:36.028091 master-0 kubenswrapper[31420]: E0220 12:20:36.027436 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4\": container with ID starting with ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4 not found: ID does not exist" containerID="ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4"
Feb 20 12:20:36.028091 master-0 kubenswrapper[31420]: I0220 12:20:36.027470 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4"} err="failed to get container status \"ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4\": rpc error: code = NotFound desc = could not find container \"ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4\": container with ID starting with ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4 not found: ID does not exist"
Feb 20 12:20:36.028091 master-0 kubenswrapper[31420]: I0220 12:20:36.027493 31420 scope.go:117] "RemoveContainer" containerID="418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300"
Feb 20 12:20:36.029513 master-0 kubenswrapper[31420]: E0220 12:20:36.028948 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300\": container with ID starting with 418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300 not found: ID does not exist" containerID="418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300"
Feb 20 12:20:36.029513 master-0 kubenswrapper[31420]: I0220 12:20:36.028974 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300"} err="failed to get container status \"418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300\": rpc error: code = NotFound desc = could not find container \"418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300\": container with ID starting with 418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300 not found: ID does not exist"
Feb 20 12:20:36.029513 master-0 kubenswrapper[31420]: I0220 12:20:36.029027 31420 scope.go:117] "RemoveContainer" containerID="ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4"
Feb 20 12:20:36.033643 master-0 kubenswrapper[31420]: I0220 12:20:36.033567 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d44a4-api-0"]
Feb 20 12:20:36.063248 master-0 kubenswrapper[31420]: I0220 12:20:36.057257 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4"} err="failed to get container status \"ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4\": rpc error: code = NotFound desc = could not find container \"ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4\": container with ID starting with ec924ed21001ff5fbb4675a8ba7a1ca4cfceb4cb890c7bfc1a56df8c2ce98ed4 not found: ID does not exist"
Feb 20 12:20:36.063248 master-0 kubenswrapper[31420]: I0220 12:20:36.057331 31420 scope.go:117] "RemoveContainer" containerID="418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300"
Feb 20 12:20:36.063248 master-0 kubenswrapper[31420]: I0220 12:20:36.057680 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300"} err="failed to get container status \"418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300\": rpc error: code = NotFound desc = could not find container \"418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300\": container with ID starting with 418eb4c500a0c6bfedc72f82d9e972d876fae5ca165bcd8cfa7721c405e08300 not found: ID does not exist"
Feb 20 12:20:36.064875 master-0
kubenswrapper[31420]: I0220 12:20:36.064586 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d44a4-api-0"]
Feb 20 12:20:36.071609 master-0 kubenswrapper[31420]: E0220 12:20:36.065150 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerName="cinder-api"
Feb 20 12:20:36.071823 master-0 kubenswrapper[31420]: I0220 12:20:36.071614 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerName="cinder-api"
Feb 20 12:20:36.071823 master-0 kubenswrapper[31420]: E0220 12:20:36.071711 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerName="cinder-d44a4-api-log"
Feb 20 12:20:36.071823 master-0 kubenswrapper[31420]: I0220 12:20:36.071721 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerName="cinder-d44a4-api-log"
Feb 20 12:20:36.072131 master-0 kubenswrapper[31420]: I0220 12:20:36.072095 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerName="cinder-d44a4-api-log"
Feb 20 12:20:36.072204 master-0 kubenswrapper[31420]: I0220 12:20:36.072150 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1640ca84-7fa2-44a9-89ff-da78a4b357df" containerName="cinder-api"
Feb 20 12:20:36.073356 master-0 kubenswrapper[31420]: I0220 12:20:36.073323 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.075943 master-0 kubenswrapper[31420]: I0220 12:20:36.075847 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 20 12:20:36.076289 master-0 kubenswrapper[31420]: I0220 12:20:36.075912 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 20 12:20:36.076289 master-0 kubenswrapper[31420]: I0220 12:20:36.076241 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-api-config-data"
Feb 20 12:20:36.080195 master-0 kubenswrapper[31420]: I0220 12:20:36.080146 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-api-0"]
Feb 20 12:20:36.203114 master-0 kubenswrapper[31420]: I0220 12:20:36.203045 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-internal-tls-certs\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.203396 master-0 kubenswrapper[31420]: I0220 12:20:36.203138 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-public-tls-certs\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.203396 master-0 kubenswrapper[31420]: I0220 12:20:36.203195 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-scripts\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.203396
master-0 kubenswrapper[31420]: I0220 12:20:36.203267 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-config-data-custom\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.203396 master-0 kubenswrapper[31420]: I0220 12:20:36.203299 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-config-data\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.203396 master-0 kubenswrapper[31420]: I0220 12:20:36.203326 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5db8e133-5ad8-492f-8dda-44e70f29dd4d-logs\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.203756 master-0 kubenswrapper[31420]: I0220 12:20:36.203696 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db8e133-5ad8-492f-8dda-44e70f29dd4d-etc-machine-id\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.203899 master-0 kubenswrapper[31420]: I0220 12:20:36.203868 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-combined-ca-bundle\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.203965 master-0 kubenswrapper[31420]: I0220 12:20:36.203942 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszc4\" (UniqueName: \"kubernetes.io/projected/5db8e133-5ad8-492f-8dda-44e70f29dd4d-kube-api-access-gszc4\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.306303 master-0 kubenswrapper[31420]: I0220 12:20:36.306192 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-public-tls-certs\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.306303 master-0 kubenswrapper[31420]: I0220 12:20:36.306289 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-scripts\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.306497 master-0 kubenswrapper[31420]: I0220 12:20:36.306340 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-config-data-custom\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.306497 master-0 kubenswrapper[31420]: I0220 12:20:36.306383 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-config-data\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.306497 master-0 kubenswrapper[31420]: I0220 12:20:36.306407 31420
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5db8e133-5ad8-492f-8dda-44e70f29dd4d-logs\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.306766 master-0 kubenswrapper[31420]: I0220 12:20:36.306612 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db8e133-5ad8-492f-8dda-44e70f29dd4d-etc-machine-id\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.306766 master-0 kubenswrapper[31420]: I0220 12:20:36.306666 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-combined-ca-bundle\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.306766 master-0 kubenswrapper[31420]: I0220 12:20:36.306706 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gszc4\" (UniqueName: \"kubernetes.io/projected/5db8e133-5ad8-492f-8dda-44e70f29dd4d-kube-api-access-gszc4\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.306766 master-0 kubenswrapper[31420]: I0220 12:20:36.306764 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-internal-tls-certs\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.307145 master-0 kubenswrapper[31420]: I0220 12:20:36.307089 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db8e133-5ad8-492f-8dda-44e70f29dd4d-etc-machine-id\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.307552 master-0 kubenswrapper[31420]: I0220 12:20:36.307499 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5db8e133-5ad8-492f-8dda-44e70f29dd4d-logs\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.309782 master-0 kubenswrapper[31420]: I0220 12:20:36.309746 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-scripts\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.311022 master-0 kubenswrapper[31420]: I0220 12:20:36.310974 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-config-data\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.312063 master-0 kubenswrapper[31420]: I0220 12:20:36.311998 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-public-tls-certs\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.312194 master-0 kubenswrapper[31420]: I0220 12:20:36.312168 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-config-data-custom\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.312567 master-0 kubenswrapper[31420]: I0220 12:20:36.312514 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-internal-tls-certs\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.319460 master-0 kubenswrapper[31420]: I0220 12:20:36.319358 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db8e133-5ad8-492f-8dda-44e70f29dd4d-combined-ca-bundle\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.329927 master-0 kubenswrapper[31420]: I0220 12:20:36.329873 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gszc4\" (UniqueName: \"kubernetes.io/projected/5db8e133-5ad8-492f-8dda-44e70f29dd4d-kube-api-access-gszc4\") pod \"cinder-d44a4-api-0\" (UID: \"5db8e133-5ad8-492f-8dda-44e70f29dd4d\") " pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.354619 master-0 kubenswrapper[31420]: I0220 12:20:36.354553 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-v9jsf"
Feb 20 12:20:36.406771 master-0 kubenswrapper[31420]: I0220 12:20:36.403106 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-d44a4-scheduler-0"
Feb 20 12:20:36.423976 master-0 kubenswrapper[31420]: I0220 12:20:36.421504 31420 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-d44a4-api-0"
Feb 20 12:20:36.510367 master-0 kubenswrapper[31420]: I0220 12:20:36.510223 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-config\") pod \"e730e756-3c53-48ff-a27d-5ddbf042a996\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") "
Feb 20 12:20:36.510367 master-0 kubenswrapper[31420]: I0220 12:20:36.510305 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-combined-ca-bundle\") pod \"e730e756-3c53-48ff-a27d-5ddbf042a996\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") "
Feb 20 12:20:36.510497 master-0 kubenswrapper[31420]: I0220 12:20:36.510447 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4hvr\" (UniqueName: \"kubernetes.io/projected/e730e756-3c53-48ff-a27d-5ddbf042a996-kube-api-access-z4hvr\") pod \"e730e756-3c53-48ff-a27d-5ddbf042a996\" (UID: \"e730e756-3c53-48ff-a27d-5ddbf042a996\") "
Feb 20 12:20:36.515167 master-0 kubenswrapper[31420]: I0220 12:20:36.515081 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e730e756-3c53-48ff-a27d-5ddbf042a996-kube-api-access-z4hvr" (OuterVolumeSpecName: "kube-api-access-z4hvr") pod "e730e756-3c53-48ff-a27d-5ddbf042a996" (UID: "e730e756-3c53-48ff-a27d-5ddbf042a996"). InnerVolumeSpecName "kube-api-access-z4hvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:20:36.545707 master-0 kubenswrapper[31420]: I0220 12:20:36.545663 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0"
Feb 20 12:20:36.561554 master-0 kubenswrapper[31420]: I0220 12:20:36.561476 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-config" (OuterVolumeSpecName: "config") pod "e730e756-3c53-48ff-a27d-5ddbf042a996" (UID: "e730e756-3c53-48ff-a27d-5ddbf042a996"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:36.567507 master-0 kubenswrapper[31420]: I0220 12:20:36.567450 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-d44a4-backup-0"
Feb 20 12:20:36.587726 master-0 kubenswrapper[31420]: I0220 12:20:36.584167 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e730e756-3c53-48ff-a27d-5ddbf042a996" (UID: "e730e756-3c53-48ff-a27d-5ddbf042a996"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:20:36.614234 master-0 kubenswrapper[31420]: I0220 12:20:36.614141 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:36.614234 master-0 kubenswrapper[31420]: I0220 12:20:36.614181 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730e756-3c53-48ff-a27d-5ddbf042a996-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:36.614234 master-0 kubenswrapper[31420]: I0220 12:20:36.614193 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4hvr\" (UniqueName: \"kubernetes.io/projected/e730e756-3c53-48ff-a27d-5ddbf042a996-kube-api-access-z4hvr\") on node \"master-0\" DevicePath \"\""
Feb 20 12:20:36.880908 master-0 kubenswrapper[31420]: W0220 12:20:36.875072 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5db8e133_5ad8_492f_8dda_44e70f29dd4d.slice/crio-2d93f8bfea8795832f02964b9233bc244780b09bb40f263399e0726b3e1dbdcb WatchSource:0}: Error finding container 2d93f8bfea8795832f02964b9233bc244780b09bb40f263399e0726b3e1dbdcb: Status 404 returned error can't find the container with id 2d93f8bfea8795832f02964b9233bc244780b09bb40f263399e0726b3e1dbdcb
Feb 20 12:20:36.880908 master-0 kubenswrapper[31420]: I0220 12:20:36.877215 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-api-0"]
Feb 20 12:20:36.957130 master-0 kubenswrapper[31420]: I0220 12:20:36.956977 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v9jsf" event={"ID":"e730e756-3c53-48ff-a27d-5ddbf042a996","Type":"ContainerDied","Data":"7681679c5d482adba1ea08795c751d392ab5e721368ff0cf5889cca74ac8e48e"}
Feb 20 12:20:36.957130 master-0 kubenswrapper[31420]: I0220 12:20:36.957069 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7681679c5d482adba1ea08795c751d392ab5e721368ff0cf5889cca74ac8e48e"
Feb 20 12:20:36.958027 master-0 kubenswrapper[31420]: I0220 12:20:36.957182 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-v9jsf"
Feb 20 12:20:36.970126 master-0 kubenswrapper[31420]: I0220 12:20:36.970036 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-api-0" event={"ID":"5db8e133-5ad8-492f-8dda-44e70f29dd4d","Type":"ContainerStarted","Data":"2d93f8bfea8795832f02964b9233bc244780b09bb40f263399e0726b3e1dbdcb"}
Feb 20 12:20:37.314313 master-0 kubenswrapper[31420]: I0220 12:20:37.309958 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb58d95b7-9pzpf"]
Feb 20 12:20:37.314313 master-0 kubenswrapper[31420]: I0220 12:20:37.310188 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" podUID="e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" containerName="dnsmasq-dns" containerID="cri-o://7ccc2549d1d3a42a3ce2c1ceb522f74b64d59bfc690fa593b2b426ce71539fe6" gracePeriod=10
Feb 20 12:20:37.395024 master-0 kubenswrapper[31420]: I0220 12:20:37.393193 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8446c48bd9-jqsfx"]
Feb 20 12:20:37.395024 master-0 kubenswrapper[31420]: E0220 12:20:37.393736 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e730e756-3c53-48ff-a27d-5ddbf042a996" containerName="neutron-db-sync"
Feb 20 12:20:37.395024 master-0 kubenswrapper[31420]: I0220 12:20:37.393751 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="e730e756-3c53-48ff-a27d-5ddbf042a996" containerName="neutron-db-sync"
Feb 20 12:20:37.395024 master-0 kubenswrapper[31420]: I0220 12:20:37.394003 31420 memory_manager.go:354] "RemoveStaleState removing
state" podUID="e730e756-3c53-48ff-a27d-5ddbf042a996" containerName="neutron-db-sync"
Feb 20 12:20:37.400598 master-0 kubenswrapper[31420]: I0220 12:20:37.395512 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.409940 master-0 kubenswrapper[31420]: I0220 12:20:37.409881 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8446c48bd9-jqsfx"]
Feb 20 12:20:37.422891 master-0 kubenswrapper[31420]: I0220 12:20:37.421992 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c69cf75d-mfgck"]
Feb 20 12:20:37.484659 master-0 kubenswrapper[31420]: I0220 12:20:37.483944 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.485510 master-0 kubenswrapper[31420]: I0220 12:20:37.485451 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c69cf75d-mfgck"]
Feb 20 12:20:37.488943 master-0 kubenswrapper[31420]: I0220 12:20:37.488812 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 20 12:20:37.489113 master-0 kubenswrapper[31420]: I0220 12:20:37.489093 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 20 12:20:37.489492 master-0 kubenswrapper[31420]: I0220 12:20:37.489197 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552436 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-svc\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552511 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-config\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552691 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5bfk\" (UniqueName: \"kubernetes.io/projected/4f6becc0-4062-4971-9300-fe40c0538d25-kube-api-access-l5bfk\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552736 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-config\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552763 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-ovndb-tls-certs\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552839 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-sb\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552867 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9564d\" (UniqueName: \"kubernetes.io/projected/4e04e8c1-0541-4161-bb09-f4250f360d61-kube-api-access-9564d\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552885 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-httpd-config\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552904 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-combined-ca-bundle\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552928 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-swift-storage-0\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.554553 master-0 kubenswrapper[31420]: I0220 12:20:37.552950 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-nb\") pod
\"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.618550 master-0 kubenswrapper[31420]: I0220 12:20:37.608936 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1640ca84-7fa2-44a9-89ff-da78a4b357df" path="/var/lib/kubelet/pods/1640ca84-7fa2-44a9-89ff-da78a4b357df/volumes"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.675856 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-svc\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.675947 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-config\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.676028 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5bfk\" (UniqueName: \"kubernetes.io/projected/4f6becc0-4062-4971-9300-fe40c0538d25-kube-api-access-l5bfk\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.676094 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-config\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.676131 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-ovndb-tls-certs\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.676349 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-sb\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.676378 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9564d\" (UniqueName: \"kubernetes.io/projected/4e04e8c1-0541-4161-bb09-f4250f360d61-kube-api-access-9564d\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.676403 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-httpd-config\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.676419 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-combined-ca-bundle\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.676456 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-swift-storage-0\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.676494 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-nb\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.680110 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-svc\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.685171 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-httpd-config\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:20:37.686554 master-0 kubenswrapper[31420]: I0220 12:20:37.685898 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-sb\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx"
Feb 20 12:20:37.694548 master-0 kubenswrapper[31420]: I0220
12:20:37.687726 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-config\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" Feb 20 12:20:37.694548 master-0 kubenswrapper[31420]: I0220 12:20:37.687836 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-swift-storage-0\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" Feb 20 12:20:37.694548 master-0 kubenswrapper[31420]: I0220 12:20:37.688718 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-nb\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" Feb 20 12:20:37.701575 master-0 kubenswrapper[31420]: I0220 12:20:37.699485 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-config\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck" Feb 20 12:20:37.715307 master-0 kubenswrapper[31420]: I0220 12:20:37.715233 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-ovndb-tls-certs\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck" Feb 20 12:20:37.737410 master-0 kubenswrapper[31420]: I0220 12:20:37.737287 31420 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-combined-ca-bundle\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck" Feb 20 12:20:37.757558 master-0 kubenswrapper[31420]: I0220 12:20:37.743718 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9564d\" (UniqueName: \"kubernetes.io/projected/4e04e8c1-0541-4161-bb09-f4250f360d61-kube-api-access-9564d\") pod \"dnsmasq-dns-8446c48bd9-jqsfx\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" Feb 20 12:20:37.757558 master-0 kubenswrapper[31420]: I0220 12:20:37.746323 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5bfk\" (UniqueName: \"kubernetes.io/projected/4f6becc0-4062-4971-9300-fe40c0538d25-kube-api-access-l5bfk\") pod \"neutron-7c69cf75d-mfgck\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " pod="openstack/neutron-7c69cf75d-mfgck" Feb 20 12:20:37.757558 master-0 kubenswrapper[31420]: I0220 12:20:37.747653 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" Feb 20 12:20:37.802392 master-0 kubenswrapper[31420]: I0220 12:20:37.801791 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c69cf75d-mfgck" Feb 20 12:20:38.028468 master-0 kubenswrapper[31420]: I0220 12:20:38.028263 31420 generic.go:334] "Generic (PLEG): container finished" podID="e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" containerID="7ccc2549d1d3a42a3ce2c1ceb522f74b64d59bfc690fa593b2b426ce71539fe6" exitCode=0 Feb 20 12:20:38.028468 master-0 kubenswrapper[31420]: I0220 12:20:38.028343 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" event={"ID":"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93","Type":"ContainerDied","Data":"7ccc2549d1d3a42a3ce2c1ceb522f74b64d59bfc690fa593b2b426ce71539fe6"} Feb 20 12:20:38.235146 master-0 kubenswrapper[31420]: I0220 12:20:38.234743 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:38.401501 master-0 kubenswrapper[31420]: I0220 12:20:38.401243 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-svc\") pod \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " Feb 20 12:20:38.401501 master-0 kubenswrapper[31420]: I0220 12:20:38.401321 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-nb\") pod \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " Feb 20 12:20:38.401501 master-0 kubenswrapper[31420]: I0220 12:20:38.401380 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-config\") pod \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " Feb 20 12:20:38.401501 master-0 kubenswrapper[31420]: I0220 
12:20:38.401479 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-sb\") pod \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " Feb 20 12:20:38.401868 master-0 kubenswrapper[31420]: I0220 12:20:38.401563 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq4zd\" (UniqueName: \"kubernetes.io/projected/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-kube-api-access-pq4zd\") pod \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " Feb 20 12:20:38.401868 master-0 kubenswrapper[31420]: I0220 12:20:38.401654 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-swift-storage-0\") pod \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\" (UID: \"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93\") " Feb 20 12:20:38.421567 master-0 kubenswrapper[31420]: I0220 12:20:38.420718 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-kube-api-access-pq4zd" (OuterVolumeSpecName: "kube-api-access-pq4zd") pod "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" (UID: "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93"). InnerVolumeSpecName "kube-api-access-pq4zd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:38.436767 master-0 kubenswrapper[31420]: W0220 12:20:38.436683 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e04e8c1_0541_4161_bb09_f4250f360d61.slice/crio-84e5bd7ee1ab1ac4242dd31defd56143150212bf5b6dceb00ce042bff5acad13 WatchSource:0}: Error finding container 84e5bd7ee1ab1ac4242dd31defd56143150212bf5b6dceb00ce042bff5acad13: Status 404 returned error can't find the container with id 84e5bd7ee1ab1ac4242dd31defd56143150212bf5b6dceb00ce042bff5acad13 Feb 20 12:20:38.499284 master-0 kubenswrapper[31420]: I0220 12:20:38.495814 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8446c48bd9-jqsfx"] Feb 20 12:20:38.499284 master-0 kubenswrapper[31420]: I0220 12:20:38.495874 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" (UID: "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:38.505312 master-0 kubenswrapper[31420]: I0220 12:20:38.504249 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq4zd\" (UniqueName: \"kubernetes.io/projected/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-kube-api-access-pq4zd\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:38.505312 master-0 kubenswrapper[31420]: I0220 12:20:38.504294 31420 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:38.521707 master-0 kubenswrapper[31420]: I0220 12:20:38.519771 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-config" (OuterVolumeSpecName: "config") pod "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" (UID: "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:38.547332 master-0 kubenswrapper[31420]: I0220 12:20:38.547187 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" (UID: "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:38.553665 master-0 kubenswrapper[31420]: I0220 12:20:38.553266 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" (UID: "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:38.562723 master-0 kubenswrapper[31420]: I0220 12:20:38.562122 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" (UID: "e0cb8d62-4ffd-4364-baaf-a93aa6d95d93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:38.606590 master-0 kubenswrapper[31420]: I0220 12:20:38.606430 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:38.606590 master-0 kubenswrapper[31420]: I0220 12:20:38.606502 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:38.606590 master-0 kubenswrapper[31420]: I0220 12:20:38.606515 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:38.606590 master-0 kubenswrapper[31420]: I0220 12:20:38.606562 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:38.662314 master-0 kubenswrapper[31420]: I0220 12:20:38.661028 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c69cf75d-mfgck"] Feb 20 12:20:38.669198 master-0 kubenswrapper[31420]: W0220 12:20:38.669130 31420 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f6becc0_4062_4971_9300_fe40c0538d25.slice/crio-437ec292af3d31dca563cc2b357a655612ea9a1a6c3727e13becc7ee8d483d78 WatchSource:0}: Error finding container 437ec292af3d31dca563cc2b357a655612ea9a1a6c3727e13becc7ee8d483d78: Status 404 returned error can't find the container with id 437ec292af3d31dca563cc2b357a655612ea9a1a6c3727e13becc7ee8d483d78 Feb 20 12:20:39.045336 master-0 kubenswrapper[31420]: I0220 12:20:39.045257 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c69cf75d-mfgck" event={"ID":"4f6becc0-4062-4971-9300-fe40c0538d25","Type":"ContainerStarted","Data":"437ec292af3d31dca563cc2b357a655612ea9a1a6c3727e13becc7ee8d483d78"} Feb 20 12:20:39.049566 master-0 kubenswrapper[31420]: I0220 12:20:39.047113 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-api-0" event={"ID":"5db8e133-5ad8-492f-8dda-44e70f29dd4d","Type":"ContainerStarted","Data":"a0d8d40c9f85837a55c664e83dfa018616ffdc2d9300101212b9a87ff3255cf3"} Feb 20 12:20:39.051632 master-0 kubenswrapper[31420]: I0220 12:20:39.051504 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" event={"ID":"4e04e8c1-0541-4161-bb09-f4250f360d61","Type":"ContainerStarted","Data":"84e5bd7ee1ab1ac4242dd31defd56143150212bf5b6dceb00ce042bff5acad13"} Feb 20 12:20:39.058010 master-0 kubenswrapper[31420]: I0220 12:20:39.053414 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" event={"ID":"e0cb8d62-4ffd-4364-baaf-a93aa6d95d93","Type":"ContainerDied","Data":"1db8be37c233d4b3b05de091a56580f3032a95dd659214bdff23afa20c807f2e"} Feb 20 12:20:39.058010 master-0 kubenswrapper[31420]: I0220 12:20:39.053487 31420 scope.go:117] "RemoveContainer" containerID="7ccc2549d1d3a42a3ce2c1ceb522f74b64d59bfc690fa593b2b426ce71539fe6" Feb 20 12:20:39.058010 master-0 kubenswrapper[31420]: I0220 12:20:39.053514 31420 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb58d95b7-9pzpf" Feb 20 12:20:39.085641 master-0 kubenswrapper[31420]: I0220 12:20:39.085537 31420 scope.go:117] "RemoveContainer" containerID="0339bdbae22ccf7db0ffb560e99b0ec908249c0758528d63c04ea42f50bbf583" Feb 20 12:20:39.259979 master-0 kubenswrapper[31420]: I0220 12:20:39.259372 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb58d95b7-9pzpf"] Feb 20 12:20:39.276605 master-0 kubenswrapper[31420]: I0220 12:20:39.273438 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb58d95b7-9pzpf"] Feb 20 12:20:39.558375 master-0 kubenswrapper[31420]: I0220 12:20:39.558296 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" path="/var/lib/kubelet/pods/e0cb8d62-4ffd-4364-baaf-a93aa6d95d93/volumes" Feb 20 12:20:40.091762 master-0 kubenswrapper[31420]: I0220 12:20:40.091650 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c69cf75d-mfgck" event={"ID":"4f6becc0-4062-4971-9300-fe40c0538d25","Type":"ContainerStarted","Data":"d9b044a68317b3d3f5d929b39f6e6e102e65dc56db00f41d411702628c62dc1c"} Feb 20 12:20:40.091762 master-0 kubenswrapper[31420]: I0220 12:20:40.091717 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c69cf75d-mfgck" event={"ID":"4f6becc0-4062-4971-9300-fe40c0538d25","Type":"ContainerStarted","Data":"63c35f90eed454fde6afe3d0e77f139ad569d08726242fa8e3eec960ee4204cf"} Feb 20 12:20:40.092300 master-0 kubenswrapper[31420]: I0220 12:20:40.091836 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c69cf75d-mfgck" Feb 20 12:20:40.096753 master-0 kubenswrapper[31420]: I0220 12:20:40.096683 31420 generic.go:334] "Generic (PLEG): container finished" podID="4e04e8c1-0541-4161-bb09-f4250f360d61" 
containerID="595b10fbb07757b8f9b2b7f36f5bab4b926fd1e1b08d71d444820209ebcc29cd" exitCode=0 Feb 20 12:20:40.096861 master-0 kubenswrapper[31420]: I0220 12:20:40.096771 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" event={"ID":"4e04e8c1-0541-4161-bb09-f4250f360d61","Type":"ContainerDied","Data":"595b10fbb07757b8f9b2b7f36f5bab4b926fd1e1b08d71d444820209ebcc29cd"} Feb 20 12:20:40.102678 master-0 kubenswrapper[31420]: I0220 12:20:40.101482 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-api-0" event={"ID":"5db8e133-5ad8-492f-8dda-44e70f29dd4d","Type":"ContainerStarted","Data":"4a390aabc27d6c6faac23fedd17ca75cb1ea2771e952163ed414b91fb4193f61"} Feb 20 12:20:40.102678 master-0 kubenswrapper[31420]: I0220 12:20:40.101698 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:40.131726 master-0 kubenswrapper[31420]: I0220 12:20:40.131658 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c69cf75d-mfgck" podStartSLOduration=3.131642553 podStartE2EDuration="3.131642553s" podCreationTimestamp="2026-02-20 12:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:40.130961164 +0000 UTC m=+944.850199445" watchObservedRunningTime="2026-02-20 12:20:40.131642553 +0000 UTC m=+944.850880794" Feb 20 12:20:40.164271 master-0 kubenswrapper[31420]: I0220 12:20:40.164190 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d44a4-api-0" podStartSLOduration=4.164170623 podStartE2EDuration="4.164170623s" podCreationTimestamp="2026-02-20 12:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:40.160039486 +0000 UTC m=+944.879277727" 
watchObservedRunningTime="2026-02-20 12:20:40.164170623 +0000 UTC m=+944.883408864" Feb 20 12:20:40.192249 master-0 kubenswrapper[31420]: I0220 12:20:40.192186 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c4cff4645-lz9x7"] Feb 20 12:20:40.192879 master-0 kubenswrapper[31420]: E0220 12:20:40.192862 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" containerName="init" Feb 20 12:20:40.192953 master-0 kubenswrapper[31420]: I0220 12:20:40.192942 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" containerName="init" Feb 20 12:20:40.193076 master-0 kubenswrapper[31420]: E0220 12:20:40.193065 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" containerName="dnsmasq-dns" Feb 20 12:20:40.193140 master-0 kubenswrapper[31420]: I0220 12:20:40.193130 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" containerName="dnsmasq-dns" Feb 20 12:20:40.193438 master-0 kubenswrapper[31420]: I0220 12:20:40.193425 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cb8d62-4ffd-4364-baaf-a93aa6d95d93" containerName="dnsmasq-dns" Feb 20 12:20:40.194563 master-0 kubenswrapper[31420]: I0220 12:20:40.194545 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.199850 master-0 kubenswrapper[31420]: I0220 12:20:40.198834 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 20 12:20:40.203724 master-0 kubenswrapper[31420]: I0220 12:20:40.203472 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 20 12:20:40.228457 master-0 kubenswrapper[31420]: I0220 12:20:40.228377 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4cff4645-lz9x7"] Feb 20 12:20:40.360224 master-0 kubenswrapper[31420]: I0220 12:20:40.359170 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-public-tls-certs\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.360224 master-0 kubenswrapper[31420]: I0220 12:20:40.359315 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-internal-tls-certs\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.360224 master-0 kubenswrapper[31420]: I0220 12:20:40.359485 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-httpd-config\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.360224 master-0 kubenswrapper[31420]: I0220 12:20:40.359572 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-combined-ca-bundle\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.360224 master-0 kubenswrapper[31420]: I0220 12:20:40.359649 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-ovndb-tls-certs\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.360224 master-0 kubenswrapper[31420]: I0220 12:20:40.359729 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bzll\" (UniqueName: \"kubernetes.io/projected/9a420cc4-49a1-449c-8180-213048aef749-kube-api-access-2bzll\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.360224 master-0 kubenswrapper[31420]: I0220 12:20:40.359915 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-config\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.462082 master-0 kubenswrapper[31420]: I0220 12:20:40.461990 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-config\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.462348 master-0 kubenswrapper[31420]: I0220 12:20:40.462144 
31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-public-tls-certs\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.462405 master-0 kubenswrapper[31420]: I0220 12:20:40.462353 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-internal-tls-certs\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.462455 master-0 kubenswrapper[31420]: I0220 12:20:40.462422 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-httpd-config\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.462505 master-0 kubenswrapper[31420]: I0220 12:20:40.462456 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-combined-ca-bundle\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.462675 master-0 kubenswrapper[31420]: I0220 12:20:40.462503 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-ovndb-tls-certs\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.462675 master-0 kubenswrapper[31420]: I0220 12:20:40.462589 31420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bzll\" (UniqueName: \"kubernetes.io/projected/9a420cc4-49a1-449c-8180-213048aef749-kube-api-access-2bzll\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.469599 master-0 kubenswrapper[31420]: I0220 12:20:40.469553 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-public-tls-certs\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.469871 master-0 kubenswrapper[31420]: I0220 12:20:40.469651 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-httpd-config\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.469970 master-0 kubenswrapper[31420]: I0220 12:20:40.469785 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-config\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.470338 master-0 kubenswrapper[31420]: I0220 12:20:40.470282 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-combined-ca-bundle\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.470621 master-0 kubenswrapper[31420]: I0220 12:20:40.470569 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-internal-tls-certs\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.471702 master-0 kubenswrapper[31420]: I0220 12:20:40.471667 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a420cc4-49a1-449c-8180-213048aef749-ovndb-tls-certs\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.485907 master-0 kubenswrapper[31420]: I0220 12:20:40.482561 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bzll\" (UniqueName: \"kubernetes.io/projected/9a420cc4-49a1-449c-8180-213048aef749-kube-api-access-2bzll\") pod \"neutron-6c4cff4645-lz9x7\" (UID: \"9a420cc4-49a1-449c-8180-213048aef749\") " pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:40.583258 master-0 kubenswrapper[31420]: I0220 12:20:40.583106 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:41.120070 master-0 kubenswrapper[31420]: I0220 12:20:41.119959 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" event={"ID":"4e04e8c1-0541-4161-bb09-f4250f360d61","Type":"ContainerStarted","Data":"04675b63566a6d3105ec458d7992db2d262a3d29955b1443f3008073c61023e3"} Feb 20 12:20:41.120676 master-0 kubenswrapper[31420]: I0220 12:20:41.120206 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" Feb 20 12:20:41.153075 master-0 kubenswrapper[31420]: I0220 12:20:41.152955 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" podStartSLOduration=4.152873058 podStartE2EDuration="4.152873058s" podCreationTimestamp="2026-02-20 12:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:41.149206854 +0000 UTC m=+945.868445105" watchObservedRunningTime="2026-02-20 12:20:41.152873058 +0000 UTC m=+945.872111319" Feb 20 12:20:41.217997 master-0 kubenswrapper[31420]: I0220 12:20:41.216113 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4cff4645-lz9x7"] Feb 20 12:20:41.237683 master-0 kubenswrapper[31420]: W0220 12:20:41.236828 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a420cc4_49a1_449c_8180_213048aef749.slice/crio-e54a96686611769e6d70edc907fd7985191aedb15fe29dbff933e2d020471d89 WatchSource:0}: Error finding container e54a96686611769e6d70edc907fd7985191aedb15fe29dbff933e2d020471d89: Status 404 returned error can't find the container with id e54a96686611769e6d70edc907fd7985191aedb15fe29dbff933e2d020471d89 Feb 20 12:20:41.708835 master-0 kubenswrapper[31420]: I0220 12:20:41.708741 31420 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:41.777401 master-0 kubenswrapper[31420]: I0220 12:20:41.774580 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d44a4-scheduler-0"] Feb 20 12:20:41.798864 master-0 kubenswrapper[31420]: I0220 12:20:41.798662 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:41.808816 master-0 kubenswrapper[31420]: I0220 12:20:41.808765 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:41.876275 master-0 kubenswrapper[31420]: I0220 12:20:41.876230 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d44a4-backup-0"] Feb 20 12:20:41.928261 master-0 kubenswrapper[31420]: I0220 12:20:41.928191 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d44a4-volume-lvm-iscsi-0"] Feb 20 12:20:42.136662 master-0 kubenswrapper[31420]: I0220 12:20:42.133133 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4cff4645-lz9x7" event={"ID":"9a420cc4-49a1-449c-8180-213048aef749","Type":"ContainerStarted","Data":"97f1579903e0db5f33866c02e75c998162c613faed35e54eb82cd274475ba338"} Feb 20 12:20:42.136662 master-0 kubenswrapper[31420]: I0220 12:20:42.133188 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4cff4645-lz9x7" event={"ID":"9a420cc4-49a1-449c-8180-213048aef749","Type":"ContainerStarted","Data":"f5ad48a16984d72c2b8c7fc736ae30aac863e6486f096d2f58b1dfd890b0544c"} Feb 20 12:20:42.136662 master-0 kubenswrapper[31420]: I0220 12:20:42.133200 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4cff4645-lz9x7" event={"ID":"9a420cc4-49a1-449c-8180-213048aef749","Type":"ContainerStarted","Data":"e54a96686611769e6d70edc907fd7985191aedb15fe29dbff933e2d020471d89"} Feb 20 
12:20:42.136662 master-0 kubenswrapper[31420]: I0220 12:20:42.133352 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-d44a4-scheduler-0" podUID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerName="cinder-scheduler" containerID="cri-o://fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c" gracePeriod=30 Feb 20 12:20:42.136662 master-0 kubenswrapper[31420]: I0220 12:20:42.133450 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-d44a4-scheduler-0" podUID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerName="probe" containerID="cri-o://6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4" gracePeriod=30 Feb 20 12:20:42.136662 master-0 kubenswrapper[31420]: I0220 12:20:42.133855 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" podUID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerName="cinder-volume" containerID="cri-o://b6c0206c6145effc16fdb8403acedaa44d75fe3936abb334b1cd205396cd982c" gracePeriod=30 Feb 20 12:20:42.136662 master-0 kubenswrapper[31420]: I0220 12:20:42.133953 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-d44a4-backup-0" podUID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerName="cinder-backup" containerID="cri-o://a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc" gracePeriod=30 Feb 20 12:20:42.136662 master-0 kubenswrapper[31420]: I0220 12:20:42.134042 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" podUID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerName="probe" containerID="cri-o://bbdaf214a60b249dbdce79322999fb549904dfeb3a103943f978a29d1da1ecca" gracePeriod=30 Feb 20 12:20:42.136662 master-0 kubenswrapper[31420]: I0220 12:20:42.134257 31420 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-d44a4-backup-0" podUID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerName="probe" containerID="cri-o://881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4" gracePeriod=30 Feb 20 12:20:43.152176 master-0 kubenswrapper[31420]: I0220 12:20:43.152116 31420 generic.go:334] "Generic (PLEG): container finished" podID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerID="6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4" exitCode=0 Feb 20 12:20:43.152752 master-0 kubenswrapper[31420]: I0220 12:20:43.152256 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-scheduler-0" event={"ID":"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c","Type":"ContainerDied","Data":"6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4"} Feb 20 12:20:43.171995 master-0 kubenswrapper[31420]: I0220 12:20:43.170654 31420 generic.go:334] "Generic (PLEG): container finished" podID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerID="881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4" exitCode=0 Feb 20 12:20:43.171995 master-0 kubenswrapper[31420]: I0220 12:20:43.170707 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-backup-0" event={"ID":"0c21113a-3e64-4d0b-861f-16aac0d2828e","Type":"ContainerDied","Data":"881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4"} Feb 20 12:20:43.173669 master-0 kubenswrapper[31420]: I0220 12:20:43.173239 31420 generic.go:334] "Generic (PLEG): container finished" podID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerID="bbdaf214a60b249dbdce79322999fb549904dfeb3a103943f978a29d1da1ecca" exitCode=0 Feb 20 12:20:43.173669 master-0 kubenswrapper[31420]: I0220 12:20:43.173278 31420 generic.go:334] "Generic (PLEG): container finished" podID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerID="b6c0206c6145effc16fdb8403acedaa44d75fe3936abb334b1cd205396cd982c" exitCode=0 Feb 20 12:20:43.173669 master-0 kubenswrapper[31420]: I0220 
12:20:43.173291 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" event={"ID":"1c9aaa04-a36f-4409-85d4-9199555742bb","Type":"ContainerDied","Data":"bbdaf214a60b249dbdce79322999fb549904dfeb3a103943f978a29d1da1ecca"} Feb 20 12:20:43.173669 master-0 kubenswrapper[31420]: I0220 12:20:43.173387 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" event={"ID":"1c9aaa04-a36f-4409-85d4-9199555742bb","Type":"ContainerDied","Data":"b6c0206c6145effc16fdb8403acedaa44d75fe3936abb334b1cd205396cd982c"} Feb 20 12:20:43.175276 master-0 kubenswrapper[31420]: I0220 12:20:43.175223 31420 generic.go:334] "Generic (PLEG): container finished" podID="1caf9802-b963-4368-ac29-e47812b48ad3" containerID="aa7560336d93a2765df2978574a3c3d391bc12dec63fac01cdb150e552294168" exitCode=0 Feb 20 12:20:43.175357 master-0 kubenswrapper[31420]: I0220 12:20:43.175271 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-9z9n4" event={"ID":"1caf9802-b963-4368-ac29-e47812b48ad3","Type":"ContainerDied","Data":"aa7560336d93a2765df2978574a3c3d391bc12dec63fac01cdb150e552294168"} Feb 20 12:20:43.175584 master-0 kubenswrapper[31420]: I0220 12:20:43.175553 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:20:43.209198 master-0 kubenswrapper[31420]: I0220 12:20:43.209107 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c4cff4645-lz9x7" podStartSLOduration=3.209082807 podStartE2EDuration="3.209082807s" podCreationTimestamp="2026-02-20 12:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:42.174331289 +0000 UTC m=+946.893569540" watchObservedRunningTime="2026-02-20 12:20:43.209082807 +0000 UTC m=+947.928321048" Feb 20 12:20:43.271263 master-0 
kubenswrapper[31420]: I0220 12:20:43.271208 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:43.355306 master-0 kubenswrapper[31420]: I0220 12:20:43.354975 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prj2f\" (UniqueName: \"kubernetes.io/projected/1c9aaa04-a36f-4409-85d4-9199555742bb-kube-api-access-prj2f\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.355306 master-0 kubenswrapper[31420]: I0220 12:20:43.355156 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-iscsi\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.355306 master-0 kubenswrapper[31420]: I0220 12:20:43.355227 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-brick\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.355306 master-0 kubenswrapper[31420]: I0220 12:20:43.355265 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.355894 master-0 kubenswrapper[31420]: I0220 12:20:43.355771 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). 
InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.355894 master-0 kubenswrapper[31420]: I0220 12:20:43.355828 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-lib-cinder\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.356147 master-0 kubenswrapper[31420]: I0220 12:20:43.356099 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-nvme\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.356395 master-0 kubenswrapper[31420]: I0220 12:20:43.356315 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-lib-modules\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.356551 master-0 kubenswrapper[31420]: I0220 12:20:43.356363 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-combined-ca-bundle\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.357324 master-0 kubenswrapper[31420]: I0220 12:20:43.356674 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-sys\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.357324 master-0 kubenswrapper[31420]: I0220 12:20:43.356714 31420 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-dev\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.357324 master-0 kubenswrapper[31420]: I0220 12:20:43.356790 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-run\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.357324 master-0 kubenswrapper[31420]: I0220 12:20:43.356831 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-scripts\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.357324 master-0 kubenswrapper[31420]: I0220 12:20:43.356874 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data-custom\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.357324 master-0 kubenswrapper[31420]: I0220 12:20:43.356925 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-machine-id\") pod \"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.357324 master-0 kubenswrapper[31420]: I0220 12:20:43.356950 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-cinder\") pod 
\"1c9aaa04-a36f-4409-85d4-9199555742bb\" (UID: \"1c9aaa04-a36f-4409-85d4-9199555742bb\") " Feb 20 12:20:43.358630 master-0 kubenswrapper[31420]: I0220 12:20:43.358584 31420 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.358884 master-0 kubenswrapper[31420]: I0220 12:20:43.358822 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.359339 master-0 kubenswrapper[31420]: I0220 12:20:43.358970 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.359339 master-0 kubenswrapper[31420]: I0220 12:20:43.359284 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9aaa04-a36f-4409-85d4-9199555742bb-kube-api-access-prj2f" (OuterVolumeSpecName: "kube-api-access-prj2f") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "kube-api-access-prj2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:43.359461 master-0 kubenswrapper[31420]: I0220 12:20:43.359415 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-sys" (OuterVolumeSpecName: "sys") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.359461 master-0 kubenswrapper[31420]: I0220 12:20:43.359442 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.359590 master-0 kubenswrapper[31420]: I0220 12:20:43.359463 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.359590 master-0 kubenswrapper[31420]: I0220 12:20:43.359483 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.362763 master-0 kubenswrapper[31420]: I0220 12:20:43.360290 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-dev" (OuterVolumeSpecName: "dev") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.362763 master-0 kubenswrapper[31420]: I0220 12:20:43.360217 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.362763 master-0 kubenswrapper[31420]: I0220 12:20:43.360555 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-run" (OuterVolumeSpecName: "run") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.362763 master-0 kubenswrapper[31420]: I0220 12:20:43.362710 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:43.398030 master-0 kubenswrapper[31420]: I0220 12:20:43.397957 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-scripts" (OuterVolumeSpecName: "scripts") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:43.438449 master-0 kubenswrapper[31420]: I0220 12:20:43.438384 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:43.466467 master-0 kubenswrapper[31420]: I0220 12:20:43.466377 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.466467 master-0 kubenswrapper[31420]: I0220 12:20:43.466425 31420 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-sys\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.466467 master-0 kubenswrapper[31420]: I0220 12:20:43.466438 31420 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-dev\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.466467 master-0 kubenswrapper[31420]: I0220 12:20:43.466446 31420 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-run\") on 
node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.466467 master-0 kubenswrapper[31420]: I0220 12:20:43.466457 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.466467 master-0 kubenswrapper[31420]: I0220 12:20:43.466466 31420 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.466467 master-0 kubenswrapper[31420]: I0220 12:20:43.466476 31420 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.468414 master-0 kubenswrapper[31420]: I0220 12:20:43.466486 31420 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.468414 master-0 kubenswrapper[31420]: I0220 12:20:43.466496 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prj2f\" (UniqueName: \"kubernetes.io/projected/1c9aaa04-a36f-4409-85d4-9199555742bb-kube-api-access-prj2f\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.468414 master-0 kubenswrapper[31420]: I0220 12:20:43.466507 31420 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.468414 master-0 kubenswrapper[31420]: I0220 12:20:43.466518 31420 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.468414 master-0 kubenswrapper[31420]: I0220 12:20:43.466541 31420 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.468414 master-0 kubenswrapper[31420]: I0220 12:20:43.466552 31420 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c9aaa04-a36f-4409-85d4-9199555742bb-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.549803 master-0 kubenswrapper[31420]: I0220 12:20:43.549703 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data" (OuterVolumeSpecName: "config-data") pod "1c9aaa04-a36f-4409-85d4-9199555742bb" (UID: "1c9aaa04-a36f-4409-85d4-9199555742bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:43.569653 master-0 kubenswrapper[31420]: I0220 12:20:43.569578 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9aaa04-a36f-4409-85d4-9199555742bb-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:43.898911 master-0 kubenswrapper[31420]: I0220 12:20:43.898847 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:43.986083 master-0 kubenswrapper[31420]: I0220 12:20:43.986019 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data\") pod \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " Feb 20 12:20:43.993634 master-0 kubenswrapper[31420]: I0220 12:20:43.993592 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-etc-machine-id\") pod \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " Feb 20 12:20:43.993990 master-0 kubenswrapper[31420]: I0220 12:20:43.993977 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-combined-ca-bundle\") pod \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " Feb 20 12:20:43.994503 master-0 kubenswrapper[31420]: I0220 12:20:43.993886 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" (UID: "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:43.994583 master-0 kubenswrapper[31420]: I0220 12:20:43.994471 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-scripts\") pod \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " Feb 20 12:20:43.995730 master-0 kubenswrapper[31420]: I0220 12:20:43.995692 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data-custom\") pod \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " Feb 20 12:20:43.995800 master-0 kubenswrapper[31420]: I0220 12:20:43.995750 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qpzz\" (UniqueName: \"kubernetes.io/projected/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-kube-api-access-5qpzz\") pod \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\" (UID: \"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c\") " Feb 20 12:20:43.999797 master-0 kubenswrapper[31420]: I0220 12:20:43.999477 31420 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.006545 master-0 kubenswrapper[31420]: I0220 12:20:44.002274 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-kube-api-access-5qpzz" (OuterVolumeSpecName: "kube-api-access-5qpzz") pod "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" (UID: "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c"). InnerVolumeSpecName "kube-api-access-5qpzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:44.006545 master-0 kubenswrapper[31420]: I0220 12:20:44.002759 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" (UID: "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:44.016158 master-0 kubenswrapper[31420]: I0220 12:20:44.016116 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-scripts" (OuterVolumeSpecName: "scripts") pod "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" (UID: "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:44.061634 master-0 kubenswrapper[31420]: I0220 12:20:44.061520 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" (UID: "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:44.102610 master-0 kubenswrapper[31420]: I0220 12:20:44.101390 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.102610 master-0 kubenswrapper[31420]: I0220 12:20:44.101447 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.102610 master-0 kubenswrapper[31420]: I0220 12:20:44.101462 31420 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.102610 master-0 kubenswrapper[31420]: I0220 12:20:44.101475 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qpzz\" (UniqueName: \"kubernetes.io/projected/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-kube-api-access-5qpzz\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.113501 master-0 kubenswrapper[31420]: I0220 12:20:44.113419 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data" (OuterVolumeSpecName: "config-data") pod "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" (UID: "7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:44.131525 master-0 kubenswrapper[31420]: I0220 12:20:44.131480 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.199920 master-0 kubenswrapper[31420]: I0220 12:20:44.193774 31420 generic.go:334] "Generic (PLEG): container finished" podID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerID="fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c" exitCode=0 Feb 20 12:20:44.199920 master-0 kubenswrapper[31420]: I0220 12:20:44.193816 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.199920 master-0 kubenswrapper[31420]: I0220 12:20:44.193870 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-scheduler-0" event={"ID":"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c","Type":"ContainerDied","Data":"fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c"} Feb 20 12:20:44.199920 master-0 kubenswrapper[31420]: I0220 12:20:44.193898 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-scheduler-0" event={"ID":"7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c","Type":"ContainerDied","Data":"87a4fa58cd182c90e5acb960d708b66f44136fa35a70d72c73fe871b4e8af3c1"} Feb 20 12:20:44.199920 master-0 kubenswrapper[31420]: I0220 12:20:44.193930 31420 scope.go:117] "RemoveContainer" containerID="6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4" Feb 20 12:20:44.199920 master-0 kubenswrapper[31420]: I0220 12:20:44.198332 31420 generic.go:334] "Generic (PLEG): container finished" podID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerID="a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc" exitCode=0 Feb 20 12:20:44.199920 master-0 kubenswrapper[31420]: I0220 12:20:44.198400 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-backup-0" event={"ID":"0c21113a-3e64-4d0b-861f-16aac0d2828e","Type":"ContainerDied","Data":"a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc"} Feb 20 12:20:44.199920 master-0 
kubenswrapper[31420]: I0220 12:20:44.198425 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-backup-0" event={"ID":"0c21113a-3e64-4d0b-861f-16aac0d2828e","Type":"ContainerDied","Data":"74b35e21dfedd908fb52194f379158ab37462b04efc99b30f298e6b4a46584dd"} Feb 20 12:20:44.199920 master-0 kubenswrapper[31420]: I0220 12:20:44.198476 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.203267 master-0 kubenswrapper[31420]: I0220 12:20:44.203202 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.204827 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-lib-cinder\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.204909 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-iscsi\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205027 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-sys\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205186 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-lib-modules\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205258 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h67gl\" (UniqueName: \"kubernetes.io/projected/0c21113a-3e64-4d0b-861f-16aac0d2828e-kube-api-access-h67gl\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205378 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-run\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205416 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-nvme\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205487 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data-custom\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205558 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-cinder\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") 
" Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205588 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-combined-ca-bundle\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205628 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205694 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-dev\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205722 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-brick\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205781 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-scripts\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.206354 master-0 kubenswrapper[31420]: I0220 12:20:44.205845 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-machine-id\") pod \"0c21113a-3e64-4d0b-861f-16aac0d2828e\" (UID: \"0c21113a-3e64-4d0b-861f-16aac0d2828e\") " Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.207880 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.207955 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.208001 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.208023 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.208059 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-sys" (OuterVolumeSpecName: "sys") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.208077 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.209559 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.209719 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-run" (OuterVolumeSpecName: "run") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.209920 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-dev" (OuterVolumeSpecName: "dev") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.211018 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.211075 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" event={"ID":"1c9aaa04-a36f-4409-85d4-9199555742bb","Type":"ContainerDied","Data":"d2a5128d49beb16a0ccbc451edb50e285f00e100d2ce784efe2df0f046754dea"} Feb 20 12:20:44.222737 master-0 kubenswrapper[31420]: I0220 12:20:44.212299 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 12:20:44.249749 master-0 kubenswrapper[31420]: I0220 12:20:44.249310 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:44.252294 master-0 kubenswrapper[31420]: I0220 12:20:44.252250 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c21113a-3e64-4d0b-861f-16aac0d2828e-kube-api-access-h67gl" (OuterVolumeSpecName: "kube-api-access-h67gl") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "kube-api-access-h67gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:44.255824 master-0 kubenswrapper[31420]: I0220 12:20:44.255428 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-scripts" (OuterVolumeSpecName: "scripts") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:44.272052 master-0 kubenswrapper[31420]: I0220 12:20:44.266586 31420 scope.go:117] "RemoveContainer" containerID="fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c" Feb 20 12:20:44.272052 master-0 kubenswrapper[31420]: I0220 12:20:44.268723 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d44a4-scheduler-0"] Feb 20 12:20:44.299080 master-0 kubenswrapper[31420]: I0220 12:20:44.296844 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:44.299080 master-0 kubenswrapper[31420]: I0220 12:20:44.297666 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d44a4-scheduler-0"] Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.309619 31420 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.317857 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.317999 31420 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-dev\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318017 31420 
reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318138 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318358 31420 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318379 31420 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318893 31420 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318913 31420 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-sys\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318941 31420 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318958 31420 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-h67gl\" (UniqueName: \"kubernetes.io/projected/0c21113a-3e64-4d0b-861f-16aac0d2828e-kube-api-access-h67gl\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318972 31420 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-run\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318984 31420 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0c21113a-3e64-4d0b-861f-16aac0d2828e-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.318997 31420 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.320376 master-0 kubenswrapper[31420]: I0220 12:20:44.317065 31420 scope.go:117] "RemoveContainer" containerID="6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4" Feb 20 12:20:44.321822 master-0 kubenswrapper[31420]: E0220 12:20:44.321768 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4\": container with ID starting with 6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4 not found: ID does not exist" containerID="6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4" Feb 20 12:20:44.321891 master-0 kubenswrapper[31420]: I0220 12:20:44.321842 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4"} err="failed to get container status 
\"6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4\": rpc error: code = NotFound desc = could not find container \"6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4\": container with ID starting with 6a5f20015d60f0caf4a3c90f7a11a9bb69d3e34616ebeb3bf9ad1e2b81e33cd4 not found: ID does not exist" Feb 20 12:20:44.321891 master-0 kubenswrapper[31420]: I0220 12:20:44.321879 31420 scope.go:117] "RemoveContainer" containerID="fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c" Feb 20 12:20:44.330599 master-0 kubenswrapper[31420]: E0220 12:20:44.328997 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c\": container with ID starting with fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c not found: ID does not exist" containerID="fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c" Feb 20 12:20:44.330599 master-0 kubenswrapper[31420]: I0220 12:20:44.330058 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c"} err="failed to get container status \"fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c\": rpc error: code = NotFound desc = could not find container \"fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c\": container with ID starting with fe2b648170fe1bb7db5faad7bf9580cd0581a232b17bbf9188d51410a783040c not found: ID does not exist" Feb 20 12:20:44.330599 master-0 kubenswrapper[31420]: I0220 12:20:44.330128 31420 scope.go:117] "RemoveContainer" containerID="881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4" Feb 20 12:20:44.330599 master-0 kubenswrapper[31420]: I0220 12:20:44.330327 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d44a4-scheduler-0"] Feb 20 12:20:44.354138 
master-0 kubenswrapper[31420]: E0220 12:20:44.350043 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerName="cinder-volume" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350096 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerName="cinder-volume" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: E0220 12:20:44.350139 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerName="cinder-backup" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350149 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerName="cinder-backup" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: E0220 12:20:44.350165 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerName="probe" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350174 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerName="probe" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: E0220 12:20:44.350201 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerName="probe" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350209 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerName="probe" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: E0220 12:20:44.350238 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerName="probe" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350246 31420 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerName="probe" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: E0220 12:20:44.350280 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerName="cinder-scheduler" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350289 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerName="cinder-scheduler" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350622 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerName="cinder-backup" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350658 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerName="cinder-volume" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350697 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c21113a-3e64-4d0b-861f-16aac0d2828e" containerName="probe" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350714 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9aaa04-a36f-4409-85d4-9199555742bb" containerName="probe" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350728 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerName="probe" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.350747 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" containerName="cinder-scheduler" Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.352103 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-scheduler-0"] Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 
12:20:44.352131 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d44a4-volume-lvm-iscsi-0"] Feb 20 12:20:44.354138 master-0 kubenswrapper[31420]: I0220 12:20:44.352222 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.357769 master-0 kubenswrapper[31420]: I0220 12:20:44.357572 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-scheduler-config-data" Feb 20 12:20:44.357769 master-0 kubenswrapper[31420]: I0220 12:20:44.357622 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d44a4-volume-lvm-iscsi-0"] Feb 20 12:20:44.372130 master-0 kubenswrapper[31420]: I0220 12:20:44.368510 31420 scope.go:117] "RemoveContainer" containerID="a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc" Feb 20 12:20:44.392650 master-0 kubenswrapper[31420]: I0220 12:20:44.388408 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d44a4-volume-lvm-iscsi-0"] Feb 20 12:20:44.392650 master-0 kubenswrapper[31420]: I0220 12:20:44.390648 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.398094 master-0 kubenswrapper[31420]: I0220 12:20:44.398049 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-volume-lvm-iscsi-config-data" Feb 20 12:20:44.416930 master-0 kubenswrapper[31420]: I0220 12:20:44.416857 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-volume-lvm-iscsi-0"] Feb 20 12:20:44.422865 master-0 kubenswrapper[31420]: I0220 12:20:44.422806 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-config-data\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.423009 master-0 kubenswrapper[31420]: I0220 12:20:44.422909 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-etc-machine-id\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.423009 master-0 kubenswrapper[31420]: I0220 12:20:44.422955 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-scripts\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.423009 master-0 kubenswrapper[31420]: I0220 12:20:44.422981 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvtth\" (UniqueName: \"kubernetes.io/projected/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-kube-api-access-gvtth\") pod 
\"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.423323 master-0 kubenswrapper[31420]: I0220 12:20:44.423036 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-config-data-custom\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.423323 master-0 kubenswrapper[31420]: I0220 12:20:44.423163 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-combined-ca-bundle\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.424843 master-0 kubenswrapper[31420]: I0220 12:20:44.424669 31420 scope.go:117] "RemoveContainer" containerID="881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4" Feb 20 12:20:44.425366 master-0 kubenswrapper[31420]: E0220 12:20:44.425313 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4\": container with ID starting with 881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4 not found: ID does not exist" containerID="881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4" Feb 20 12:20:44.425427 master-0 kubenswrapper[31420]: I0220 12:20:44.425366 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4"} err="failed to get container status \"881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4\": rpc error: code = 
NotFound desc = could not find container \"881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4\": container with ID starting with 881fe204546a65081ad238b04d163d16275a68fda01f05651ed23a032ffd1bd4 not found: ID does not exist" Feb 20 12:20:44.425427 master-0 kubenswrapper[31420]: I0220 12:20:44.425395 31420 scope.go:117] "RemoveContainer" containerID="a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc" Feb 20 12:20:44.425843 master-0 kubenswrapper[31420]: E0220 12:20:44.425806 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc\": container with ID starting with a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc not found: ID does not exist" containerID="a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc" Feb 20 12:20:44.425958 master-0 kubenswrapper[31420]: I0220 12:20:44.425840 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc"} err="failed to get container status \"a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc\": rpc error: code = NotFound desc = could not find container \"a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc\": container with ID starting with a7675feb7bac5f97d669dd031b106a1466591362c68c21e5915e5afd8e6ff1bc not found: ID does not exist" Feb 20 12:20:44.425958 master-0 kubenswrapper[31420]: I0220 12:20:44.425866 31420 scope.go:117] "RemoveContainer" containerID="bbdaf214a60b249dbdce79322999fb549904dfeb3a103943f978a29d1da1ecca" Feb 20 12:20:44.439338 master-0 kubenswrapper[31420]: I0220 12:20:44.439299 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data" (OuterVolumeSpecName: "config-data") pod 
"0c21113a-3e64-4d0b-861f-16aac0d2828e" (UID: "0c21113a-3e64-4d0b-861f-16aac0d2828e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:44.460919 master-0 kubenswrapper[31420]: I0220 12:20:44.460849 31420 scope.go:117] "RemoveContainer" containerID="b6c0206c6145effc16fdb8403acedaa44d75fe3936abb334b1cd205396cd982c" Feb 20 12:20:44.526494 master-0 kubenswrapper[31420]: I0220 12:20:44.526427 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-etc-machine-id\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.526677 master-0 kubenswrapper[31420]: I0220 12:20:44.526521 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-etc-machine-id\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.526677 master-0 kubenswrapper[31420]: I0220 12:20:44.526554 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-config-data-custom\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.526677 master-0 kubenswrapper[31420]: I0220 12:20:44.526640 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-scripts\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.526798 master-0 
kubenswrapper[31420]: I0220 12:20:44.526725 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-combined-ca-bundle\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.526798 master-0 kubenswrapper[31420]: I0220 12:20:44.526760 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvtth\" (UniqueName: \"kubernetes.io/projected/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-kube-api-access-gvtth\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.526798 master-0 kubenswrapper[31420]: I0220 12:20:44.526793 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-dev\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.526896 master-0 kubenswrapper[31420]: I0220 12:20:44.526817 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-etc-nvme\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.526896 master-0 kubenswrapper[31420]: I0220 12:20:44.526858 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-config-data-custom\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " 
pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.526896 master-0 kubenswrapper[31420]: I0220 12:20:44.526885 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-lib-modules\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.526997 master-0 kubenswrapper[31420]: I0220 12:20:44.526911 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-run\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.526997 master-0 kubenswrapper[31420]: I0220 12:20:44.526964 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-etc-machine-id\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.527055 master-0 kubenswrapper[31420]: I0220 12:20:44.526998 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-var-locks-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.527055 master-0 kubenswrapper[31420]: I0220 12:20:44.527033 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-var-lib-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.527120 master-0 kubenswrapper[31420]: I0220 12:20:44.527062 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-etc-iscsi\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.527120 master-0 kubenswrapper[31420]: I0220 12:20:44.527095 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-scripts\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.527178 master-0 kubenswrapper[31420]: I0220 12:20:44.527164 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-sys\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.527248 master-0 kubenswrapper[31420]: I0220 12:20:44.527211 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-combined-ca-bundle\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.527424 master-0 kubenswrapper[31420]: I0220 12:20:44.527368 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdzjx\" (UniqueName: \"kubernetes.io/projected/49d0bb61-aecd-4962-a916-db3bcd3d9767-kube-api-access-rdzjx\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.527470 master-0 kubenswrapper[31420]: I0220 12:20:44.527444 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-config-data\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.527501 master-0 kubenswrapper[31420]: I0220 12:20:44.527486 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-config-data\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.527696 master-0 kubenswrapper[31420]: I0220 12:20:44.527514 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-var-locks-brick\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.527696 master-0 kubenswrapper[31420]: I0220 12:20:44.527658 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c21113a-3e64-4d0b-861f-16aac0d2828e-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:44.534332 master-0 kubenswrapper[31420]: I0220 12:20:44.534280 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-config-data-custom\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.534488 master-0 kubenswrapper[31420]: I0220 12:20:44.534449 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-combined-ca-bundle\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.534543 master-0 kubenswrapper[31420]: I0220 12:20:44.534496 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-scripts\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.534880 master-0 kubenswrapper[31420]: I0220 12:20:44.534839 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-config-data\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.553941 master-0 kubenswrapper[31420]: I0220 12:20:44.553828 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvtth\" (UniqueName: \"kubernetes.io/projected/0fcdb646-ba2b-466f-b072-1fd1b9e18a2d-kube-api-access-gvtth\") pod \"cinder-d44a4-scheduler-0\" (UID: \"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d\") " pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.580969 master-0 kubenswrapper[31420]: I0220 12:20:44.580861 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d44a4-backup-0"] Feb 20 12:20:44.613265 master-0 
kubenswrapper[31420]: I0220 12:20:44.612115 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d44a4-backup-0"] Feb 20 12:20:44.621917 master-0 kubenswrapper[31420]: I0220 12:20:44.621854 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d44a4-backup-0"] Feb 20 12:20:44.627931 master-0 kubenswrapper[31420]: I0220 12:20:44.627837 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.630953 master-0 kubenswrapper[31420]: I0220 12:20:44.630905 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-d44a4-backup-config-data" Feb 20 12:20:44.632360 master-0 kubenswrapper[31420]: I0220 12:20:44.632310 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-etc-machine-id\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.632360 master-0 kubenswrapper[31420]: I0220 12:20:44.632350 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-var-locks-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.632580 master-0 kubenswrapper[31420]: I0220 12:20:44.632383 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-var-lib-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.632580 master-0 kubenswrapper[31420]: I0220 12:20:44.632400 
31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-etc-iscsi\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.632580 master-0 kubenswrapper[31420]: I0220 12:20:44.632420 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-scripts\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.632580 master-0 kubenswrapper[31420]: I0220 12:20:44.632467 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-sys\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.632580 master-0 kubenswrapper[31420]: I0220 12:20:44.632535 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdzjx\" (UniqueName: \"kubernetes.io/projected/49d0bb61-aecd-4962-a916-db3bcd3d9767-kube-api-access-rdzjx\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.632580 master-0 kubenswrapper[31420]: I0220 12:20:44.632568 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-config-data\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.632580 master-0 kubenswrapper[31420]: I0220 
12:20:44.632591 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-var-locks-brick\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633158 master-0 kubenswrapper[31420]: I0220 12:20:44.632648 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-config-data-custom\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633158 master-0 kubenswrapper[31420]: I0220 12:20:44.632686 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-combined-ca-bundle\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633158 master-0 kubenswrapper[31420]: I0220 12:20:44.632723 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-dev\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633158 master-0 kubenswrapper[31420]: I0220 12:20:44.632747 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-etc-nvme\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633158 master-0 
kubenswrapper[31420]: I0220 12:20:44.632788 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-lib-modules\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633158 master-0 kubenswrapper[31420]: I0220 12:20:44.632815 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-run\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633158 master-0 kubenswrapper[31420]: I0220 12:20:44.632893 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-run\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633158 master-0 kubenswrapper[31420]: I0220 12:20:44.633140 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-etc-machine-id\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633768 master-0 kubenswrapper[31420]: I0220 12:20:44.633179 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-var-locks-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633768 master-0 kubenswrapper[31420]: 
I0220 12:20:44.633220 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-var-lib-cinder\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.633768 master-0 kubenswrapper[31420]: I0220 12:20:44.633247 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-etc-iscsi\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.641470 master-0 kubenswrapper[31420]: I0220 12:20:44.641397 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-backup-0"] Feb 20 12:20:44.642741 master-0 kubenswrapper[31420]: I0220 12:20:44.642676 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-sys\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.646084 master-0 kubenswrapper[31420]: I0220 12:20:44.645753 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-var-locks-brick\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.646084 master-0 kubenswrapper[31420]: I0220 12:20:44.645818 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-etc-nvme\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: 
\"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.646084 master-0 kubenswrapper[31420]: I0220 12:20:44.645898 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-dev\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.646084 master-0 kubenswrapper[31420]: I0220 12:20:44.645967 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49d0bb61-aecd-4962-a916-db3bcd3d9767-lib-modules\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.657596 master-0 kubenswrapper[31420]: I0220 12:20:44.656478 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-config-data\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.666018 master-0 kubenswrapper[31420]: I0220 12:20:44.664893 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-scripts\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.668263 master-0 kubenswrapper[31420]: I0220 12:20:44.668232 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-config-data-custom\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " 
pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.668699 master-0 kubenswrapper[31420]: I0220 12:20:44.668597 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d0bb61-aecd-4962-a916-db3bcd3d9767-combined-ca-bundle\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.671359 master-0 kubenswrapper[31420]: I0220 12:20:44.670880 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdzjx\" (UniqueName: \"kubernetes.io/projected/49d0bb61-aecd-4962-a916-db3bcd3d9767-kube-api-access-rdzjx\") pod \"cinder-d44a4-volume-lvm-iscsi-0\" (UID: \"49d0bb61-aecd-4962-a916-db3bcd3d9767\") " pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.698289 master-0 kubenswrapper[31420]: I0220 12:20:44.694418 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:44.725677 master-0 kubenswrapper[31420]: I0220 12:20:44.724643 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:44.734237 master-0 kubenswrapper[31420]: I0220 12:20:44.734187 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-lib-modules\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734348 master-0 kubenswrapper[31420]: I0220 12:20:44.734250 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-scripts\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734348 master-0 kubenswrapper[31420]: I0220 12:20:44.734279 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-var-locks-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734348 master-0 kubenswrapper[31420]: I0220 12:20:44.734297 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-dev\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734348 master-0 kubenswrapper[31420]: I0220 12:20:44.734318 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-sys\") pod \"cinder-d44a4-backup-0\" (UID: 
\"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734521 master-0 kubenswrapper[31420]: I0220 12:20:44.734371 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-config-data-custom\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734521 master-0 kubenswrapper[31420]: I0220 12:20:44.734409 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-config-data\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734521 master-0 kubenswrapper[31420]: I0220 12:20:44.734431 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-var-lib-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734521 master-0 kubenswrapper[31420]: I0220 12:20:44.734444 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-etc-nvme\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734521 master-0 kubenswrapper[31420]: I0220 12:20:44.734483 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-var-locks-brick\") pod 
\"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734521 master-0 kubenswrapper[31420]: I0220 12:20:44.734503 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-etc-machine-id\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734765 master-0 kubenswrapper[31420]: I0220 12:20:44.734538 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-combined-ca-bundle\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734765 master-0 kubenswrapper[31420]: I0220 12:20:44.734559 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-run\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734765 master-0 kubenswrapper[31420]: I0220 12:20:44.734599 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcgc\" (UniqueName: \"kubernetes.io/projected/d41cf8dc-8d38-4183-89fb-5d89372e867e-kube-api-access-zwcgc\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.734765 master-0 kubenswrapper[31420]: I0220 12:20:44.734615 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-etc-iscsi\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837293 master-0 kubenswrapper[31420]: I0220 12:20:44.837098 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-lib-modules\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837293 master-0 kubenswrapper[31420]: I0220 12:20:44.837171 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-scripts\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837293 master-0 kubenswrapper[31420]: I0220 12:20:44.837208 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-var-locks-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837293 master-0 kubenswrapper[31420]: I0220 12:20:44.837235 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-dev\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837293 master-0 kubenswrapper[31420]: I0220 12:20:44.837269 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-sys\") pod \"cinder-d44a4-backup-0\" (UID: 
\"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837612 master-0 kubenswrapper[31420]: I0220 12:20:44.837339 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-config-data-custom\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837612 master-0 kubenswrapper[31420]: I0220 12:20:44.837390 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-config-data\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837612 master-0 kubenswrapper[31420]: I0220 12:20:44.837417 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-var-lib-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837612 master-0 kubenswrapper[31420]: I0220 12:20:44.837437 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-etc-nvme\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837612 master-0 kubenswrapper[31420]: I0220 12:20:44.837472 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-var-locks-brick\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " 
pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837612 master-0 kubenswrapper[31420]: I0220 12:20:44.837495 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-etc-machine-id\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837612 master-0 kubenswrapper[31420]: I0220 12:20:44.837545 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-combined-ca-bundle\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837612 master-0 kubenswrapper[31420]: I0220 12:20:44.837573 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-run\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837928 master-0 kubenswrapper[31420]: I0220 12:20:44.837624 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwcgc\" (UniqueName: \"kubernetes.io/projected/d41cf8dc-8d38-4183-89fb-5d89372e867e-kube-api-access-zwcgc\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837928 master-0 kubenswrapper[31420]: I0220 12:20:44.837650 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-etc-iscsi\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837928 
master-0 kubenswrapper[31420]: I0220 12:20:44.837824 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-etc-iscsi\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.837928 master-0 kubenswrapper[31420]: I0220 12:20:44.837870 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-lib-modules\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.838559 master-0 kubenswrapper[31420]: I0220 12:20:44.838484 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-var-locks-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.838626 master-0 kubenswrapper[31420]: I0220 12:20:44.838592 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-dev\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.838670 master-0 kubenswrapper[31420]: I0220 12:20:44.838631 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-sys\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.839985 master-0 kubenswrapper[31420]: I0220 12:20:44.839955 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" 
(UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-var-lib-cinder\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.840091 master-0 kubenswrapper[31420]: I0220 12:20:44.840013 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-etc-nvme\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.840091 master-0 kubenswrapper[31420]: I0220 12:20:44.840055 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-var-locks-brick\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.840091 master-0 kubenswrapper[31420]: I0220 12:20:44.840079 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-etc-machine-id\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.840686 master-0 kubenswrapper[31420]: I0220 12:20:44.840646 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d41cf8dc-8d38-4183-89fb-5d89372e867e-run\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.844432 master-0 kubenswrapper[31420]: I0220 12:20:44.844384 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-combined-ca-bundle\") pod 
\"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.854629 master-0 kubenswrapper[31420]: I0220 12:20:44.854095 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-config-data-custom\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.855332 master-0 kubenswrapper[31420]: I0220 12:20:44.855282 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-scripts\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.856140 master-0 kubenswrapper[31420]: I0220 12:20:44.856088 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41cf8dc-8d38-4183-89fb-5d89372e867e-config-data\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.882549 master-0 kubenswrapper[31420]: I0220 12:20:44.875693 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwcgc\" (UniqueName: \"kubernetes.io/projected/d41cf8dc-8d38-4183-89fb-5d89372e867e-kube-api-access-zwcgc\") pod \"cinder-d44a4-backup-0\" (UID: \"d41cf8dc-8d38-4183-89fb-5d89372e867e\") " pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.959730 master-0 kubenswrapper[31420]: I0220 12:20:44.959666 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:44.990462 master-0 kubenswrapper[31420]: I0220 12:20:44.989551 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:45.144101 master-0 kubenswrapper[31420]: I0220 12:20:45.142890 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4jdp\" (UniqueName: \"kubernetes.io/projected/1caf9802-b963-4368-ac29-e47812b48ad3-kube-api-access-p4jdp\") pod \"1caf9802-b963-4368-ac29-e47812b48ad3\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " Feb 20 12:20:45.144101 master-0 kubenswrapper[31420]: I0220 12:20:45.142947 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1caf9802-b963-4368-ac29-e47812b48ad3-etc-podinfo\") pod \"1caf9802-b963-4368-ac29-e47812b48ad3\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " Feb 20 12:20:45.144101 master-0 kubenswrapper[31420]: I0220 12:20:45.143101 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-combined-ca-bundle\") pod \"1caf9802-b963-4368-ac29-e47812b48ad3\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " Feb 20 12:20:45.144101 master-0 kubenswrapper[31420]: I0220 12:20:45.143189 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-config-data\") pod \"1caf9802-b963-4368-ac29-e47812b48ad3\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " Feb 20 12:20:45.144101 master-0 kubenswrapper[31420]: I0220 12:20:45.143308 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-scripts\") pod \"1caf9802-b963-4368-ac29-e47812b48ad3\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " Feb 20 12:20:45.144101 master-0 kubenswrapper[31420]: I0220 12:20:45.143380 31420 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1caf9802-b963-4368-ac29-e47812b48ad3-config-data-merged\") pod \"1caf9802-b963-4368-ac29-e47812b48ad3\" (UID: \"1caf9802-b963-4368-ac29-e47812b48ad3\") " Feb 20 12:20:45.144893 master-0 kubenswrapper[31420]: I0220 12:20:45.144670 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1caf9802-b963-4368-ac29-e47812b48ad3-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "1caf9802-b963-4368-ac29-e47812b48ad3" (UID: "1caf9802-b963-4368-ac29-e47812b48ad3"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:20:45.149342 master-0 kubenswrapper[31420]: I0220 12:20:45.149274 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1caf9802-b963-4368-ac29-e47812b48ad3-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "1caf9802-b963-4368-ac29-e47812b48ad3" (UID: "1caf9802-b963-4368-ac29-e47812b48ad3"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 20 12:20:45.152050 master-0 kubenswrapper[31420]: I0220 12:20:45.151987 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1caf9802-b963-4368-ac29-e47812b48ad3-kube-api-access-p4jdp" (OuterVolumeSpecName: "kube-api-access-p4jdp") pod "1caf9802-b963-4368-ac29-e47812b48ad3" (UID: "1caf9802-b963-4368-ac29-e47812b48ad3"). InnerVolumeSpecName "kube-api-access-p4jdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:45.153730 master-0 kubenswrapper[31420]: I0220 12:20:45.153652 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-scripts" (OuterVolumeSpecName: "scripts") pod "1caf9802-b963-4368-ac29-e47812b48ad3" (UID: "1caf9802-b963-4368-ac29-e47812b48ad3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:45.181627 master-0 kubenswrapper[31420]: I0220 12:20:45.181573 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-config-data" (OuterVolumeSpecName: "config-data") pod "1caf9802-b963-4368-ac29-e47812b48ad3" (UID: "1caf9802-b963-4368-ac29-e47812b48ad3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:45.212622 master-0 kubenswrapper[31420]: I0220 12:20:45.212571 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-volume-lvm-iscsi-0"] Feb 20 12:20:45.227189 master-0 kubenswrapper[31420]: W0220 12:20:45.227133 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fcdb646_ba2b_466f_b072_1fd1b9e18a2d.slice/crio-d490382d2e409d0ad59ff37bcd5ce737c3fe18535055c5b621d6ce80941d135c WatchSource:0}: Error finding container d490382d2e409d0ad59ff37bcd5ce737c3fe18535055c5b621d6ce80941d135c: Status 404 returned error can't find the container with id d490382d2e409d0ad59ff37bcd5ce737c3fe18535055c5b621d6ce80941d135c Feb 20 12:20:45.227962 master-0 kubenswrapper[31420]: I0220 12:20:45.227903 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-9z9n4" event={"ID":"1caf9802-b963-4368-ac29-e47812b48ad3","Type":"ContainerDied","Data":"704b949559673cc39aea04ffc269074517b7e2bfc091883085abd01edd13ed97"} Feb 20 
12:20:45.227962 master-0 kubenswrapper[31420]: I0220 12:20:45.227933 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-9z9n4" Feb 20 12:20:45.228403 master-0 kubenswrapper[31420]: I0220 12:20:45.227939 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="704b949559673cc39aea04ffc269074517b7e2bfc091883085abd01edd13ed97" Feb 20 12:20:45.232323 master-0 kubenswrapper[31420]: W0220 12:20:45.232268 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49d0bb61_aecd_4962_a916_db3bcd3d9767.slice/crio-7874768d2d7235e6d2f9069aed5d09cebc638af6f2aca29d28ef587e9b8f1b20 WatchSource:0}: Error finding container 7874768d2d7235e6d2f9069aed5d09cebc638af6f2aca29d28ef587e9b8f1b20: Status 404 returned error can't find the container with id 7874768d2d7235e6d2f9069aed5d09cebc638af6f2aca29d28ef587e9b8f1b20 Feb 20 12:20:45.245710 master-0 kubenswrapper[31420]: I0220 12:20:45.245380 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:45.245710 master-0 kubenswrapper[31420]: I0220 12:20:45.245416 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:45.245710 master-0 kubenswrapper[31420]: I0220 12:20:45.245426 31420 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1caf9802-b963-4368-ac29-e47812b48ad3-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:45.245710 master-0 kubenswrapper[31420]: I0220 12:20:45.245437 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4jdp\" (UniqueName: 
\"kubernetes.io/projected/1caf9802-b963-4368-ac29-e47812b48ad3-kube-api-access-p4jdp\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:45.245710 master-0 kubenswrapper[31420]: I0220 12:20:45.245445 31420 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1caf9802-b963-4368-ac29-e47812b48ad3-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:45.261713 master-0 kubenswrapper[31420]: I0220 12:20:45.261637 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1caf9802-b963-4368-ac29-e47812b48ad3" (UID: "1caf9802-b963-4368-ac29-e47812b48ad3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:20:45.291000 master-0 kubenswrapper[31420]: I0220 12:20:45.290947 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-scheduler-0"] Feb 20 12:20:45.350802 master-0 kubenswrapper[31420]: I0220 12:20:45.350758 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caf9802-b963-4368-ac29-e47812b48ad3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:45.536471 master-0 kubenswrapper[31420]: I0220 12:20:45.536340 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c21113a-3e64-4d0b-861f-16aac0d2828e" path="/var/lib/kubelet/pods/0c21113a-3e64-4d0b-861f-16aac0d2828e/volumes" Feb 20 12:20:45.537218 master-0 kubenswrapper[31420]: I0220 12:20:45.537195 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9aaa04-a36f-4409-85d4-9199555742bb" path="/var/lib/kubelet/pods/1c9aaa04-a36f-4409-85d4-9199555742bb/volumes" Feb 20 12:20:45.537876 master-0 kubenswrapper[31420]: I0220 12:20:45.537841 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c" path="/var/lib/kubelet/pods/7264e78a-1d5d-44a0-b0a0-3bbd3513fe0c/volumes" Feb 20 12:20:45.628439 master-0 kubenswrapper[31420]: I0220 12:20:45.618249 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d44a4-backup-0"] Feb 20 12:20:45.875622 master-0 kubenswrapper[31420]: I0220 12:20:45.852048 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-2p4tg"] Feb 20 12:20:45.875622 master-0 kubenswrapper[31420]: E0220 12:20:45.852614 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1caf9802-b963-4368-ac29-e47812b48ad3" containerName="ironic-db-sync" Feb 20 12:20:45.875622 master-0 kubenswrapper[31420]: I0220 12:20:45.852629 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1caf9802-b963-4368-ac29-e47812b48ad3" containerName="ironic-db-sync" Feb 20 12:20:45.875622 master-0 kubenswrapper[31420]: E0220 12:20:45.852653 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1caf9802-b963-4368-ac29-e47812b48ad3" containerName="init" Feb 20 12:20:45.875622 master-0 kubenswrapper[31420]: I0220 12:20:45.852660 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1caf9802-b963-4368-ac29-e47812b48ad3" containerName="init" Feb 20 12:20:45.875622 master-0 kubenswrapper[31420]: I0220 12:20:45.852974 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1caf9802-b963-4368-ac29-e47812b48ad3" containerName="ironic-db-sync" Feb 20 12:20:45.885024 master-0 kubenswrapper[31420]: I0220 12:20:45.884985 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:46.020749 master-0 kubenswrapper[31420]: I0220 12:20:46.014752 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk9k7\" (UniqueName: \"kubernetes.io/projected/aab73f02-e440-40c2-bc9d-073803f49fc8-kube-api-access-kk9k7\") pod \"ironic-inspector-db-create-2p4tg\" (UID: \"aab73f02-e440-40c2-bc9d-073803f49fc8\") " pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:46.020749 master-0 kubenswrapper[31420]: I0220 12:20:46.014890 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab73f02-e440-40c2-bc9d-073803f49fc8-operator-scripts\") pod \"ironic-inspector-db-create-2p4tg\" (UID: \"aab73f02-e440-40c2-bc9d-073803f49fc8\") " pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:46.087756 master-0 kubenswrapper[31420]: I0220 12:20:46.082599 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-2p4tg"] Feb 20 12:20:46.104551 master-0 kubenswrapper[31420]: I0220 12:20:46.102630 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-50c8-account-create-update-m4sp9"] Feb 20 12:20:46.135604 master-0 kubenswrapper[31420]: I0220 12:20:46.135130 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk9k7\" (UniqueName: \"kubernetes.io/projected/aab73f02-e440-40c2-bc9d-073803f49fc8-kube-api-access-kk9k7\") pod \"ironic-inspector-db-create-2p4tg\" (UID: \"aab73f02-e440-40c2-bc9d-073803f49fc8\") " pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:46.135604 master-0 kubenswrapper[31420]: I0220 12:20:46.135206 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/aab73f02-e440-40c2-bc9d-073803f49fc8-operator-scripts\") pod \"ironic-inspector-db-create-2p4tg\" (UID: \"aab73f02-e440-40c2-bc9d-073803f49fc8\") " pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:46.136028 master-0 kubenswrapper[31420]: I0220 12:20:46.136004 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab73f02-e440-40c2-bc9d-073803f49fc8-operator-scripts\") pod \"ironic-inspector-db-create-2p4tg\" (UID: \"aab73f02-e440-40c2-bc9d-073803f49fc8\") " pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:46.147888 master-0 kubenswrapper[31420]: I0220 12:20:46.142086 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:46.147888 master-0 kubenswrapper[31420]: I0220 12:20:46.147825 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Feb 20 12:20:46.153651 master-0 kubenswrapper[31420]: I0220 12:20:46.153580 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-50c8-account-create-update-m4sp9"] Feb 20 12:20:46.192767 master-0 kubenswrapper[31420]: I0220 12:20:46.182548 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-5db78c68bd-t4cm6"] Feb 20 12:20:46.192767 master-0 kubenswrapper[31420]: I0220 12:20:46.184206 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.192767 master-0 kubenswrapper[31420]: I0220 12:20:46.188864 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Feb 20 12:20:46.206550 master-0 kubenswrapper[31420]: I0220 12:20:46.199981 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk9k7\" (UniqueName: \"kubernetes.io/projected/aab73f02-e440-40c2-bc9d-073803f49fc8-kube-api-access-kk9k7\") pod \"ironic-inspector-db-create-2p4tg\" (UID: \"aab73f02-e440-40c2-bc9d-073803f49fc8\") " pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:46.211549 master-0 kubenswrapper[31420]: I0220 12:20:46.208582 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-5db78c68bd-t4cm6"] Feb 20 12:20:46.243635 master-0 kubenswrapper[31420]: I0220 12:20:46.236971 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpj2d\" (UniqueName: \"kubernetes.io/projected/95fc91ce-d187-45a1-bc88-45c0415d6cde-kube-api-access-hpj2d\") pod \"ironic-inspector-50c8-account-create-update-m4sp9\" (UID: \"95fc91ce-d187-45a1-bc88-45c0415d6cde\") " pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:46.243635 master-0 kubenswrapper[31420]: I0220 12:20:46.237074 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc91ce-d187-45a1-bc88-45c0415d6cde-operator-scripts\") pod \"ironic-inspector-50c8-account-create-update-m4sp9\" (UID: \"95fc91ce-d187-45a1-bc88-45c0415d6cde\") " pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:46.282994 master-0 kubenswrapper[31420]: I0220 12:20:46.282940 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" event={"ID":"49d0bb61-aecd-4962-a916-db3bcd3d9767","Type":"ContainerStarted","Data":"ff129fdd61137978f385c8a28e93f1f579f3f49ada058ce94daad156632037a1"} Feb 20 12:20:46.282994 master-0 kubenswrapper[31420]: I0220 12:20:46.282991 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" event={"ID":"49d0bb61-aecd-4962-a916-db3bcd3d9767","Type":"ContainerStarted","Data":"7874768d2d7235e6d2f9069aed5d09cebc638af6f2aca29d28ef587e9b8f1b20"} Feb 20 12:20:46.284247 master-0 kubenswrapper[31420]: I0220 12:20:46.284223 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-scheduler-0" event={"ID":"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d","Type":"ContainerStarted","Data":"d490382d2e409d0ad59ff37bcd5ce737c3fe18535055c5b621d6ce80941d135c"} Feb 20 12:20:46.293747 master-0 kubenswrapper[31420]: I0220 12:20:46.293666 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-backup-0" event={"ID":"d41cf8dc-8d38-4183-89fb-5d89372e867e","Type":"ContainerStarted","Data":"d375c1371edbad09753f6e8dd52cc3b0629774edcc53d840e4ac777f845e8ea4"} Feb 20 12:20:46.302616 master-0 kubenswrapper[31420]: I0220 12:20:46.302570 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8446c48bd9-jqsfx"] Feb 20 12:20:46.302875 master-0 kubenswrapper[31420]: I0220 12:20:46.302836 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" podUID="4e04e8c1-0541-4161-bb09-f4250f360d61" containerName="dnsmasq-dns" containerID="cri-o://04675b63566a6d3105ec458d7992db2d262a3d29955b1443f3008073c61023e3" gracePeriod=10 Feb 20 12:20:46.305248 master-0 kubenswrapper[31420]: I0220 12:20:46.305081 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" Feb 20 12:20:46.314838 master-0 kubenswrapper[31420]: I0220 
12:20:46.314796 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5589979f4f-n6jwb"] Feb 20 12:20:46.316904 master-0 kubenswrapper[31420]: I0220 12:20:46.316878 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.324709 master-0 kubenswrapper[31420]: I0220 12:20:46.324596 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5589979f4f-n6jwb"] Feb 20 12:20:46.340083 master-0 kubenswrapper[31420]: I0220 12:20:46.339870 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2416044-6dc6-4ce7-8b30-574bce497d5e-combined-ca-bundle\") pod \"ironic-neutron-agent-5db78c68bd-t4cm6\" (UID: \"d2416044-6dc6-4ce7-8b30-574bce497d5e\") " pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.340083 master-0 kubenswrapper[31420]: I0220 12:20:46.339959 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q2xs\" (UniqueName: \"kubernetes.io/projected/d2416044-6dc6-4ce7-8b30-574bce497d5e-kube-api-access-6q2xs\") pod \"ironic-neutron-agent-5db78c68bd-t4cm6\" (UID: \"d2416044-6dc6-4ce7-8b30-574bce497d5e\") " pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.340083 master-0 kubenswrapper[31420]: I0220 12:20:46.339996 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2416044-6dc6-4ce7-8b30-574bce497d5e-config\") pod \"ironic-neutron-agent-5db78c68bd-t4cm6\" (UID: \"d2416044-6dc6-4ce7-8b30-574bce497d5e\") " pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.340083 master-0 kubenswrapper[31420]: I0220 12:20:46.340029 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpj2d\" 
(UniqueName: \"kubernetes.io/projected/95fc91ce-d187-45a1-bc88-45c0415d6cde-kube-api-access-hpj2d\") pod \"ironic-inspector-50c8-account-create-update-m4sp9\" (UID: \"95fc91ce-d187-45a1-bc88-45c0415d6cde\") " pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:46.340083 master-0 kubenswrapper[31420]: I0220 12:20:46.340080 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc91ce-d187-45a1-bc88-45c0415d6cde-operator-scripts\") pod \"ironic-inspector-50c8-account-create-update-m4sp9\" (UID: \"95fc91ce-d187-45a1-bc88-45c0415d6cde\") " pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:46.340947 master-0 kubenswrapper[31420]: I0220 12:20:46.340818 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc91ce-d187-45a1-bc88-45c0415d6cde-operator-scripts\") pod \"ironic-inspector-50c8-account-create-update-m4sp9\" (UID: \"95fc91ce-d187-45a1-bc88-45c0415d6cde\") " pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:46.354562 master-0 kubenswrapper[31420]: I0220 12:20:46.352593 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-5f9fdb754b-plq9n"] Feb 20 12:20:46.355258 master-0 kubenswrapper[31420]: I0220 12:20:46.355176 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.356263 master-0 kubenswrapper[31420]: I0220 12:20:46.356126 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:46.375812 master-0 kubenswrapper[31420]: I0220 12:20:46.359282 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Feb 20 12:20:46.375812 master-0 kubenswrapper[31420]: I0220 12:20:46.359595 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 20 12:20:46.375812 master-0 kubenswrapper[31420]: I0220 12:20:46.359758 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Feb 20 12:20:46.375812 master-0 kubenswrapper[31420]: I0220 12:20:46.359900 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 20 12:20:46.375812 master-0 kubenswrapper[31420]: I0220 12:20:46.362047 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Feb 20 12:20:46.385269 master-0 kubenswrapper[31420]: I0220 12:20:46.385224 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpj2d\" (UniqueName: \"kubernetes.io/projected/95fc91ce-d187-45a1-bc88-45c0415d6cde-kube-api-access-hpj2d\") pod \"ironic-inspector-50c8-account-create-update-m4sp9\" (UID: \"95fc91ce-d187-45a1-bc88-45c0415d6cde\") " pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:46.405768 master-0 kubenswrapper[31420]: I0220 12:20:46.405611 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5f9fdb754b-plq9n"] Feb 20 12:20:46.449171 master-0 kubenswrapper[31420]: I0220 12:20:46.449115 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " 
pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.449246 master-0 kubenswrapper[31420]: I0220 12:20:46.449179 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2416044-6dc6-4ce7-8b30-574bce497d5e-combined-ca-bundle\") pod \"ironic-neutron-agent-5db78c68bd-t4cm6\" (UID: \"d2416044-6dc6-4ce7-8b30-574bce497d5e\") " pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.449246 master-0 kubenswrapper[31420]: I0220 12:20:46.449219 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdrjl\" (UniqueName: \"kubernetes.io/projected/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-kube-api-access-wdrjl\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.449246 master-0 kubenswrapper[31420]: I0220 12:20:46.449239 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data-custom\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.449378 master-0 kubenswrapper[31420]: I0220 12:20:46.449354 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q2xs\" (UniqueName: \"kubernetes.io/projected/d2416044-6dc6-4ce7-8b30-574bce497d5e-kube-api-access-6q2xs\") pod \"ironic-neutron-agent-5db78c68bd-t4cm6\" (UID: \"d2416044-6dc6-4ce7-8b30-574bce497d5e\") " pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.449416 master-0 kubenswrapper[31420]: I0220 12:20:46.449397 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-scripts\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.449446 master-0 kubenswrapper[31420]: I0220 12:20:46.449420 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.449475 master-0 kubenswrapper[31420]: I0220 12:20:46.449467 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2416044-6dc6-4ce7-8b30-574bce497d5e-config\") pod \"ironic-neutron-agent-5db78c68bd-t4cm6\" (UID: \"d2416044-6dc6-4ce7-8b30-574bce497d5e\") " pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.449546 master-0 kubenswrapper[31420]: I0220 12:20:46.449516 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-config-data-merged\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.450950 master-0 kubenswrapper[31420]: I0220 12:20:46.450900 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.451024 master-0 kubenswrapper[31420]: I0220 12:20:46.450960 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-logs\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.451024 master-0 kubenswrapper[31420]: I0220 12:20:46.451000 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-svc\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.451100 master-0 kubenswrapper[31420]: I0220 12:20:46.451031 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.451100 master-0 kubenswrapper[31420]: I0220 12:20:46.451097 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wpl\" (UniqueName: \"kubernetes.io/projected/81f14191-e623-41d6-8a28-e59e94280af4-kube-api-access-z2wpl\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.451465 master-0 kubenswrapper[31420]: I0220 12:20:46.451418 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-combined-ca-bundle\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.451465 master-0 
kubenswrapper[31420]: I0220 12:20:46.451440 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/81f14191-e623-41d6-8a28-e59e94280af4-etc-podinfo\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.451601 master-0 kubenswrapper[31420]: I0220 12:20:46.451555 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-config\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.460732 master-0 kubenswrapper[31420]: I0220 12:20:46.458852 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2416044-6dc6-4ce7-8b30-574bce497d5e-config\") pod \"ironic-neutron-agent-5db78c68bd-t4cm6\" (UID: \"d2416044-6dc6-4ce7-8b30-574bce497d5e\") " pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.474375 master-0 kubenswrapper[31420]: I0220 12:20:46.474295 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q2xs\" (UniqueName: \"kubernetes.io/projected/d2416044-6dc6-4ce7-8b30-574bce497d5e-kube-api-access-6q2xs\") pod \"ironic-neutron-agent-5db78c68bd-t4cm6\" (UID: \"d2416044-6dc6-4ce7-8b30-574bce497d5e\") " pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.485697 master-0 kubenswrapper[31420]: I0220 12:20:46.485648 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:46.486246 master-0 kubenswrapper[31420]: I0220 12:20:46.486210 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2416044-6dc6-4ce7-8b30-574bce497d5e-combined-ca-bundle\") pod \"ironic-neutron-agent-5db78c68bd-t4cm6\" (UID: \"d2416044-6dc6-4ce7-8b30-574bce497d5e\") " pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:46.553218 master-0 kubenswrapper[31420]: I0220 12:20:46.553106 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-combined-ca-bundle\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.553218 master-0 kubenswrapper[31420]: I0220 12:20:46.553173 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/81f14191-e623-41d6-8a28-e59e94280af4-etc-podinfo\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.553218 master-0 kubenswrapper[31420]: I0220 12:20:46.553209 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-config\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.553470 master-0 kubenswrapper[31420]: I0220 12:20:46.553231 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data\") pod \"ironic-5f9fdb754b-plq9n\" (UID: 
\"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.553470 master-0 kubenswrapper[31420]: I0220 12:20:46.553254 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data-custom\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.553470 master-0 kubenswrapper[31420]: I0220 12:20:46.553277 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdrjl\" (UniqueName: \"kubernetes.io/projected/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-kube-api-access-wdrjl\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.553470 master-0 kubenswrapper[31420]: I0220 12:20:46.553333 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-scripts\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.553470 master-0 kubenswrapper[31420]: I0220 12:20:46.553356 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.553470 master-0 kubenswrapper[31420]: I0220 12:20:46.553396 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-config-data-merged\") pod \"ironic-5f9fdb754b-plq9n\" (UID: 
\"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.553470 master-0 kubenswrapper[31420]: I0220 12:20:46.553434 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.553470 master-0 kubenswrapper[31420]: I0220 12:20:46.553459 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-logs\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.553793 master-0 kubenswrapper[31420]: I0220 12:20:46.553479 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-svc\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.553793 master-0 kubenswrapper[31420]: I0220 12:20:46.553499 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.553793 master-0 kubenswrapper[31420]: I0220 12:20:46.553541 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2wpl\" (UniqueName: \"kubernetes.io/projected/81f14191-e623-41d6-8a28-e59e94280af4-kube-api-access-z2wpl\") pod \"ironic-5f9fdb754b-plq9n\" (UID: 
\"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.564958 master-0 kubenswrapper[31420]: I0220 12:20:46.564898 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/81f14191-e623-41d6-8a28-e59e94280af4-etc-podinfo\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.565985 master-0 kubenswrapper[31420]: I0220 12:20:46.565950 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-config\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.566141 master-0 kubenswrapper[31420]: I0220 12:20:46.566098 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-scripts\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.566793 master-0 kubenswrapper[31420]: I0220 12:20:46.566765 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.567321 master-0 kubenswrapper[31420]: I0220 12:20:46.567293 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " 
pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.567558 master-0 kubenswrapper[31420]: I0220 12:20:46.567520 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-config-data-merged\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.568404 master-0 kubenswrapper[31420]: I0220 12:20:46.568366 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.569507 master-0 kubenswrapper[31420]: I0220 12:20:46.569470 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-svc\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.569767 master-0 kubenswrapper[31420]: I0220 12:20:46.569737 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-logs\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.570835 master-0 kubenswrapper[31420]: I0220 12:20:46.570771 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.571180 master-0 
kubenswrapper[31420]: I0220 12:20:46.571139 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data-custom\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.591662 master-0 kubenswrapper[31420]: I0220 12:20:46.579173 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-combined-ca-bundle\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:46.596823 master-0 kubenswrapper[31420]: I0220 12:20:46.596352 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdrjl\" (UniqueName: \"kubernetes.io/projected/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-kube-api-access-wdrjl\") pod \"dnsmasq-dns-5589979f4f-n6jwb\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:46.619849 master-0 kubenswrapper[31420]: I0220 12:20:46.619748 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2wpl\" (UniqueName: \"kubernetes.io/projected/81f14191-e623-41d6-8a28-e59e94280af4-kube-api-access-z2wpl\") pod \"ironic-5f9fdb754b-plq9n\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:47.154682 master-0 kubenswrapper[31420]: I0220 12:20:47.154626 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:47.318559 master-0 kubenswrapper[31420]: I0220 12:20:47.317583 31420 generic.go:334] "Generic (PLEG): container finished" podID="4e04e8c1-0541-4161-bb09-f4250f360d61" containerID="04675b63566a6d3105ec458d7992db2d262a3d29955b1443f3008073c61023e3" exitCode=0 Feb 20 12:20:47.318559 master-0 kubenswrapper[31420]: I0220 12:20:47.317666 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" event={"ID":"4e04e8c1-0541-4161-bb09-f4250f360d61","Type":"ContainerDied","Data":"04675b63566a6d3105ec458d7992db2d262a3d29955b1443f3008073c61023e3"} Feb 20 12:20:47.326768 master-0 kubenswrapper[31420]: I0220 12:20:47.320581 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" event={"ID":"49d0bb61-aecd-4962-a916-db3bcd3d9767","Type":"ContainerStarted","Data":"ee0bafefc14fc0019054d695be0b965626f9589e5fa93f973fd70a1b4d19c5ab"} Feb 20 12:20:47.367643 master-0 kubenswrapper[31420]: I0220 12:20:47.361361 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" podStartSLOduration=3.36133894 podStartE2EDuration="3.36133894s" podCreationTimestamp="2026-02-20 12:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:47.349119115 +0000 UTC m=+952.068357376" watchObservedRunningTime="2026-02-20 12:20:47.36133894 +0000 UTC m=+952.080577171" Feb 20 12:20:47.432554 master-0 kubenswrapper[31420]: I0220 12:20:47.431368 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-2p4tg"] Feb 20 12:20:47.578634 master-0 kubenswrapper[31420]: I0220 12:20:47.574274 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-50c8-account-create-update-m4sp9"] Feb 20 12:20:47.721576 
master-0 kubenswrapper[31420]: I0220 12:20:47.721522 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:47.778279 master-0 kubenswrapper[31420]: I0220 12:20:47.778244 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:47.810007 master-0 kubenswrapper[31420]: I0220 12:20:47.809962 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" Feb 20 12:20:47.925161 master-0 kubenswrapper[31420]: I0220 12:20:47.922286 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-5db78c68bd-t4cm6"] Feb 20 12:20:47.963551 master-0 kubenswrapper[31420]: I0220 12:20:47.950691 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Feb 20 12:20:47.963551 master-0 kubenswrapper[31420]: E0220 12:20:47.951216 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e04e8c1-0541-4161-bb09-f4250f360d61" containerName="init" Feb 20 12:20:47.963551 master-0 kubenswrapper[31420]: I0220 12:20:47.951232 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e04e8c1-0541-4161-bb09-f4250f360d61" containerName="init" Feb 20 12:20:47.963551 master-0 kubenswrapper[31420]: E0220 12:20:47.951254 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e04e8c1-0541-4161-bb09-f4250f360d61" containerName="dnsmasq-dns" Feb 20 12:20:47.963551 master-0 kubenswrapper[31420]: I0220 12:20:47.951260 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e04e8c1-0541-4161-bb09-f4250f360d61" containerName="dnsmasq-dns" Feb 20 12:20:47.963551 master-0 kubenswrapper[31420]: I0220 12:20:47.951492 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e04e8c1-0541-4161-bb09-f4250f360d61" containerName="dnsmasq-dns" Feb 20 12:20:47.996559 master-0 
kubenswrapper[31420]: I0220 12:20:47.994272 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 20 12:20:47.996559 master-0 kubenswrapper[31420]: I0220 12:20:47.994389 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Feb 20 12:20:48.006774 master-0 kubenswrapper[31420]: W0220 12:20:48.003736 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2416044_6dc6_4ce7_8b30_574bce497d5e.slice/crio-8a062fb097df2aad5760eec2a2849bbc48508232612c6b3cee338dfb3ace525c WatchSource:0}: Error finding container 8a062fb097df2aad5760eec2a2849bbc48508232612c6b3cee338dfb3ace525c: Status 404 returned error can't find the container with id 8a062fb097df2aad5760eec2a2849bbc48508232612c6b3cee338dfb3ace525c Feb 20 12:20:48.006774 master-0 kubenswrapper[31420]: I0220 12:20:48.004147 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Feb 20 12:20:48.006774 master-0 kubenswrapper[31420]: I0220 12:20:48.004361 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Feb 20 12:20:48.006774 master-0 kubenswrapper[31420]: I0220 12:20:48.005551 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9564d\" (UniqueName: \"kubernetes.io/projected/4e04e8c1-0541-4161-bb09-f4250f360d61-kube-api-access-9564d\") pod \"4e04e8c1-0541-4161-bb09-f4250f360d61\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " Feb 20 12:20:48.006774 master-0 kubenswrapper[31420]: I0220 12:20:48.005618 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-sb\") pod \"4e04e8c1-0541-4161-bb09-f4250f360d61\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " 
Feb 20 12:20:48.006774 master-0 kubenswrapper[31420]: I0220 12:20:48.005682 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-nb\") pod \"4e04e8c1-0541-4161-bb09-f4250f360d61\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " Feb 20 12:20:48.006774 master-0 kubenswrapper[31420]: I0220 12:20:48.005753 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-swift-storage-0\") pod \"4e04e8c1-0541-4161-bb09-f4250f360d61\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " Feb 20 12:20:48.006774 master-0 kubenswrapper[31420]: I0220 12:20:48.005876 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-config\") pod \"4e04e8c1-0541-4161-bb09-f4250f360d61\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " Feb 20 12:20:48.006774 master-0 kubenswrapper[31420]: I0220 12:20:48.005923 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-svc\") pod \"4e04e8c1-0541-4161-bb09-f4250f360d61\" (UID: \"4e04e8c1-0541-4161-bb09-f4250f360d61\") " Feb 20 12:20:48.072773 master-0 kubenswrapper[31420]: I0220 12:20:48.064039 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e04e8c1-0541-4161-bb09-f4250f360d61-kube-api-access-9564d" (OuterVolumeSpecName: "kube-api-access-9564d") pod "4e04e8c1-0541-4161-bb09-f4250f360d61" (UID: "4e04e8c1-0541-4161-bb09-f4250f360d61"). InnerVolumeSpecName "kube-api-access-9564d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:48.110423 master-0 kubenswrapper[31420]: I0220 12:20:48.108689 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.110423 master-0 kubenswrapper[31420]: I0220 12:20:48.108808 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpwf\" (UniqueName: \"kubernetes.io/projected/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-kube-api-access-rmpwf\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.110423 master-0 kubenswrapper[31420]: I0220 12:20:48.108861 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.110423 master-0 kubenswrapper[31420]: I0220 12:20:48.108925 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.110423 master-0 kubenswrapper[31420]: I0220 12:20:48.108951 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-scripts\") pod \"ironic-conductor-0\" (UID: 
\"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.110423 master-0 kubenswrapper[31420]: I0220 12:20:48.108994 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99ff1016-1120-46f3-97dc-6ce412b6cc40\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af0db9e5-778a-47d8-b7e3-337dfc557ed7\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.110423 master-0 kubenswrapper[31420]: I0220 12:20:48.109654 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-config-data\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.110423 master-0 kubenswrapper[31420]: I0220 12:20:48.110420 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.111625 master-0 kubenswrapper[31420]: I0220 12:20:48.110693 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9564d\" (UniqueName: \"kubernetes.io/projected/4e04e8c1-0541-4161-bb09-f4250f360d61-kube-api-access-9564d\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:48.215081 master-0 kubenswrapper[31420]: I0220 12:20:48.213619 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 
12:20:48.215081 master-0 kubenswrapper[31420]: I0220 12:20:48.213888 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmpwf\" (UniqueName: \"kubernetes.io/projected/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-kube-api-access-rmpwf\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.215081 master-0 kubenswrapper[31420]: I0220 12:20:48.214769 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.215081 master-0 kubenswrapper[31420]: I0220 12:20:48.214835 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.215081 master-0 kubenswrapper[31420]: I0220 12:20:48.214861 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-scripts\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.215081 master-0 kubenswrapper[31420]: I0220 12:20:48.214925 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99ff1016-1120-46f3-97dc-6ce412b6cc40\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af0db9e5-778a-47d8-b7e3-337dfc557ed7\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.215081 master-0 kubenswrapper[31420]: I0220 
12:20:48.215089 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-config-data\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.215516 master-0 kubenswrapper[31420]: I0220 12:20:48.215226 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.220292 master-0 kubenswrapper[31420]: I0220 12:20:48.218218 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.220292 master-0 kubenswrapper[31420]: I0220 12:20:48.219973 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 12:20:48.220292 master-0 kubenswrapper[31420]: I0220 12:20:48.220014 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99ff1016-1120-46f3-97dc-6ce412b6cc40\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af0db9e5-778a-47d8-b7e3-337dfc557ed7\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/66d0e13fb9811db261fc0a8677e12a556394a68ec6de98f9d8a9d1f620d21aa9/globalmount\"" pod="openstack/ironic-conductor-0" Feb 20 12:20:48.241390 master-0 kubenswrapper[31420]: I0220 12:20:48.240929 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.245547 master-0 kubenswrapper[31420]: I0220 12:20:48.242179 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-scripts\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.245547 master-0 kubenswrapper[31420]: I0220 12:20:48.242406 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.251549 master-0 kubenswrapper[31420]: I0220 12:20:48.248441 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: 
\"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.251549 master-0 kubenswrapper[31420]: I0220 12:20:48.248652 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-config-data\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.251549 master-0 kubenswrapper[31420]: I0220 12:20:48.248881 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmpwf\" (UniqueName: \"kubernetes.io/projected/1c15d66e-eaa8-4305-a5cb-1fa14e718d2c-kube-api-access-rmpwf\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:48.370835 master-0 kubenswrapper[31420]: I0220 12:20:48.370645 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" event={"ID":"d2416044-6dc6-4ce7-8b30-574bce497d5e","Type":"ContainerStarted","Data":"8a062fb097df2aad5760eec2a2849bbc48508232612c6b3cee338dfb3ace525c"} Feb 20 12:20:48.389117 master-0 kubenswrapper[31420]: I0220 12:20:48.372626 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-scheduler-0" event={"ID":"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d","Type":"ContainerStarted","Data":"131de837bd6ac9a035e8dbd1435f401fb2d04ea12066513da70e14f00089473f"} Feb 20 12:20:48.389117 master-0 kubenswrapper[31420]: I0220 12:20:48.374664 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" event={"ID":"4e04e8c1-0541-4161-bb09-f4250f360d61","Type":"ContainerDied","Data":"84e5bd7ee1ab1ac4242dd31defd56143150212bf5b6dceb00ce042bff5acad13"} Feb 20 12:20:48.389117 master-0 kubenswrapper[31420]: I0220 12:20:48.374709 31420 scope.go:117] "RemoveContainer" 
containerID="04675b63566a6d3105ec458d7992db2d262a3d29955b1443f3008073c61023e3" Feb 20 12:20:48.389117 master-0 kubenswrapper[31420]: I0220 12:20:48.374883 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" Feb 20 12:20:48.389117 master-0 kubenswrapper[31420]: I0220 12:20:48.383130 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-backup-0" event={"ID":"d41cf8dc-8d38-4183-89fb-5d89372e867e","Type":"ContainerStarted","Data":"3797756f7d997f0ee91ce5cbbaf445c48d573accd06eb0da7f83cf4f32df968d"} Feb 20 12:20:48.389117 master-0 kubenswrapper[31420]: I0220 12:20:48.385106 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-2p4tg" event={"ID":"aab73f02-e440-40c2-bc9d-073803f49fc8","Type":"ContainerStarted","Data":"ca417e97161c3f238998b472d9de2e1f751bbfd181d6871cebf94542e2e98d86"} Feb 20 12:20:48.389117 master-0 kubenswrapper[31420]: I0220 12:20:48.387684 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" event={"ID":"95fc91ce-d187-45a1-bc88-45c0415d6cde","Type":"ContainerStarted","Data":"387eaf5aaa7009dc08246926076b02256f4f8593ccce0e5d6d2cfb33385fb560"} Feb 20 12:20:48.468434 master-0 kubenswrapper[31420]: I0220 12:20:48.467100 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e04e8c1-0541-4161-bb09-f4250f360d61" (UID: "4e04e8c1-0541-4161-bb09-f4250f360d61"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:48.474617 master-0 kubenswrapper[31420]: I0220 12:20:48.471355 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5589979f4f-n6jwb"] Feb 20 12:20:48.504611 master-0 kubenswrapper[31420]: I0220 12:20:48.491220 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5f9fdb754b-plq9n"] Feb 20 12:20:48.512545 master-0 kubenswrapper[31420]: I0220 12:20:48.505873 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4e04e8c1-0541-4161-bb09-f4250f360d61" (UID: "4e04e8c1-0541-4161-bb09-f4250f360d61"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:48.512545 master-0 kubenswrapper[31420]: I0220 12:20:48.509713 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-config" (OuterVolumeSpecName: "config") pod "4e04e8c1-0541-4161-bb09-f4250f360d61" (UID: "4e04e8c1-0541-4161-bb09-f4250f360d61"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:48.530545 master-0 kubenswrapper[31420]: I0220 12:20:48.530100 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:48.530545 master-0 kubenswrapper[31420]: I0220 12:20:48.530151 31420 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:48.530545 master-0 kubenswrapper[31420]: I0220 12:20:48.530168 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:48.656931 master-0 kubenswrapper[31420]: I0220 12:20:48.656226 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e04e8c1-0541-4161-bb09-f4250f360d61" (UID: "4e04e8c1-0541-4161-bb09-f4250f360d61"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:48.708555 master-0 kubenswrapper[31420]: I0220 12:20:48.705034 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e04e8c1-0541-4161-bb09-f4250f360d61" (UID: "4e04e8c1-0541-4161-bb09-f4250f360d61"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:48.738613 master-0 kubenswrapper[31420]: I0220 12:20:48.738133 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:48.738613 master-0 kubenswrapper[31420]: I0220 12:20:48.738193 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e04e8c1-0541-4161-bb09-f4250f360d61-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:48.795904 master-0 kubenswrapper[31420]: I0220 12:20:48.795851 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-d44a4-api-0" Feb 20 12:20:48.800057 master-0 kubenswrapper[31420]: I0220 12:20:48.799999 31420 scope.go:117] "RemoveContainer" containerID="595b10fbb07757b8f9b2b7f36f5bab4b926fd1e1b08d71d444820209ebcc29cd" Feb 20 12:20:49.344682 master-0 kubenswrapper[31420]: I0220 12:20:49.344632 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8446c48bd9-jqsfx"] Feb 20 12:20:49.374562 master-0 kubenswrapper[31420]: I0220 12:20:49.374491 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8446c48bd9-jqsfx"] Feb 20 12:20:49.409475 master-0 kubenswrapper[31420]: I0220 12:20:49.409413 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f9fdb754b-plq9n" event={"ID":"81f14191-e623-41d6-8a28-e59e94280af4","Type":"ContainerStarted","Data":"8b9d290d3d6fd0004a3b267569078cea27fd17fd39fac8257b81cfa517648bb0"} Feb 20 12:20:49.412360 master-0 kubenswrapper[31420]: I0220 12:20:49.412300 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" 
event={"ID":"bedd35bf-2057-4ed2-b1bd-5b65e43788b0","Type":"ContainerStarted","Data":"ee909ce32fda6e59b257bb37cdb642db6d50c72a1f9069025dc58635e080816d"} Feb 20 12:20:49.416636 master-0 kubenswrapper[31420]: I0220 12:20:49.416496 31420 generic.go:334] "Generic (PLEG): container finished" podID="aab73f02-e440-40c2-bc9d-073803f49fc8" containerID="7fe5f27a7ae49c9b623a643dc445b3fa77a869b52d99dffe7799a83234971877" exitCode=0 Feb 20 12:20:49.416752 master-0 kubenswrapper[31420]: I0220 12:20:49.416549 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-2p4tg" event={"ID":"aab73f02-e440-40c2-bc9d-073803f49fc8","Type":"ContainerDied","Data":"7fe5f27a7ae49c9b623a643dc445b3fa77a869b52d99dffe7799a83234971877"} Feb 20 12:20:49.426236 master-0 kubenswrapper[31420]: I0220 12:20:49.426167 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" event={"ID":"95fc91ce-d187-45a1-bc88-45c0415d6cde","Type":"ContainerStarted","Data":"6b38aff5ec6e38ac18cb466ee47b561072367c3019ad692af90939cdeb3e6adb"} Feb 20 12:20:49.498763 master-0 kubenswrapper[31420]: I0220 12:20:49.498674 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" podStartSLOduration=4.498650871 podStartE2EDuration="4.498650871s" podCreationTimestamp="2026-02-20 12:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:49.473118319 +0000 UTC m=+954.192356560" watchObservedRunningTime="2026-02-20 12:20:49.498650871 +0000 UTC m=+954.217889112" Feb 20 12:20:49.518652 master-0 kubenswrapper[31420]: I0220 12:20:49.518537 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e04e8c1-0541-4161-bb09-f4250f360d61" path="/var/lib/kubelet/pods/4e04e8c1-0541-4161-bb09-f4250f360d61/volumes" Feb 20 12:20:49.727259 
master-0 kubenswrapper[31420]: I0220 12:20:49.727055 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:49.743377 master-0 kubenswrapper[31420]: I0220 12:20:49.743305 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99ff1016-1120-46f3-97dc-6ce412b6cc40\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af0db9e5-778a-47d8-b7e3-337dfc557ed7\") pod \"ironic-conductor-0\" (UID: \"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c\") " pod="openstack/ironic-conductor-0" Feb 20 12:20:49.898703 master-0 kubenswrapper[31420]: I0220 12:20:49.893939 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Feb 20 12:20:50.114826 master-0 kubenswrapper[31420]: I0220 12:20:50.114765 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-584bd6df9d-zt8sf"] Feb 20 12:20:50.117762 master-0 kubenswrapper[31420]: I0220 12:20:50.117710 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.124192 master-0 kubenswrapper[31420]: I0220 12:20:50.124092 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Feb 20 12:20:50.129681 master-0 kubenswrapper[31420]: I0220 12:20:50.129629 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Feb 20 12:20:50.155634 master-0 kubenswrapper[31420]: I0220 12:20:50.154191 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-584bd6df9d-zt8sf"] Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221494 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/056164a2-48ed-4f27-80d4-a1fae8ebba54-logs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221575 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/056164a2-48ed-4f27-80d4-a1fae8ebba54-etc-podinfo\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221602 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-scripts\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221636 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-combined-ca-bundle\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221672 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-public-tls-certs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221718 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-config-data\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221742 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/056164a2-48ed-4f27-80d4-a1fae8ebba54-config-data-merged\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221779 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-internal-tls-certs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221828 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rgs\" (UniqueName: \"kubernetes.io/projected/056164a2-48ed-4f27-80d4-a1fae8ebba54-kube-api-access-f8rgs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.222025 master-0 kubenswrapper[31420]: I0220 12:20:50.221853 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-config-data-custom\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.333332 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/056164a2-48ed-4f27-80d4-a1fae8ebba54-logs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.333521 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/056164a2-48ed-4f27-80d4-a1fae8ebba54-etc-podinfo\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.333649 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-scripts\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.333713 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-combined-ca-bundle\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.333782 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-public-tls-certs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.333805 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/056164a2-48ed-4f27-80d4-a1fae8ebba54-logs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.334034 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-config-data\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.334181 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/056164a2-48ed-4f27-80d4-a1fae8ebba54-config-data-merged\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.334333 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-internal-tls-certs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.334509 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8rgs\" (UniqueName: \"kubernetes.io/projected/056164a2-48ed-4f27-80d4-a1fae8ebba54-kube-api-access-f8rgs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.334576 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-config-data-custom\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.335145 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/056164a2-48ed-4f27-80d4-a1fae8ebba54-config-data-merged\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.339884 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-config-data-custom\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.350877 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" 
(UniqueName: \"kubernetes.io/downward-api/056164a2-48ed-4f27-80d4-a1fae8ebba54-etc-podinfo\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.351389 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-combined-ca-bundle\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.351399 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-internal-tls-certs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.351657 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-config-data\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.362346 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-public-tls-certs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.365624 master-0 kubenswrapper[31420]: I0220 12:20:50.362613 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/056164a2-48ed-4f27-80d4-a1fae8ebba54-scripts\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.383657 master-0 kubenswrapper[31420]: I0220 12:20:50.372102 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8rgs\" (UniqueName: \"kubernetes.io/projected/056164a2-48ed-4f27-80d4-a1fae8ebba54-kube-api-access-f8rgs\") pod \"ironic-584bd6df9d-zt8sf\" (UID: \"056164a2-48ed-4f27-80d4-a1fae8ebba54\") " pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.383657 master-0 kubenswrapper[31420]: I0220 12:20:50.381305 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:50.442557 master-0 kubenswrapper[31420]: I0220 12:20:50.441670 31420 generic.go:334] "Generic (PLEG): container finished" podID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" containerID="8cfb56d9455925a4695afd81e26d997b24beb7da151d9c019d68eece4909ad64" exitCode=0 Feb 20 12:20:50.442557 master-0 kubenswrapper[31420]: I0220 12:20:50.441760 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" event={"ID":"bedd35bf-2057-4ed2-b1bd-5b65e43788b0","Type":"ContainerDied","Data":"8cfb56d9455925a4695afd81e26d997b24beb7da151d9c019d68eece4909ad64"} Feb 20 12:20:50.456578 master-0 kubenswrapper[31420]: I0220 12:20:50.453353 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:20:50.456578 master-0 kubenswrapper[31420]: I0220 12:20:50.454960 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-scheduler-0" event={"ID":"0fcdb646-ba2b-466f-b072-1fd1b9e18a2d","Type":"ContainerStarted","Data":"24125e01bcd4da5a488680607c8421a54af13b9a81c82f6d5269ea5473c4ed3e"} Feb 20 12:20:50.463554 master-0 kubenswrapper[31420]: I0220 12:20:50.459164 31420 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-d44a4-backup-0" event={"ID":"d41cf8dc-8d38-4183-89fb-5d89372e867e","Type":"ContainerStarted","Data":"f6ae9ad378676b8a1c0e3ec2d050c53705501f59e176c3f02cb5e87514ee4650"} Feb 20 12:20:50.476476 master-0 kubenswrapper[31420]: I0220 12:20:50.474569 31420 generic.go:334] "Generic (PLEG): container finished" podID="95fc91ce-d187-45a1-bc88-45c0415d6cde" containerID="6b38aff5ec6e38ac18cb466ee47b561072367c3019ad692af90939cdeb3e6adb" exitCode=0 Feb 20 12:20:50.476476 master-0 kubenswrapper[31420]: I0220 12:20:50.474860 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" event={"ID":"95fc91ce-d187-45a1-bc88-45c0415d6cde","Type":"ContainerDied","Data":"6b38aff5ec6e38ac18cb466ee47b561072367c3019ad692af90939cdeb3e6adb"} Feb 20 12:20:50.493509 master-0 kubenswrapper[31420]: I0220 12:20:50.489387 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:50.595712 master-0 kubenswrapper[31420]: I0220 12:20:50.590117 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d44a4-backup-0" podStartSLOduration=6.590101241 podStartE2EDuration="6.590101241s" podCreationTimestamp="2026-02-20 12:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:50.586459648 +0000 UTC m=+955.305697889" watchObservedRunningTime="2026-02-20 12:20:50.590101241 +0000 UTC m=+955.309339482" Feb 20 12:20:50.700552 master-0 kubenswrapper[31420]: I0220 12:20:50.700360 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 20 12:20:50.717819 master-0 kubenswrapper[31420]: W0220 12:20:50.717740 31420 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c15d66e_eaa8_4305_a5cb_1fa14e718d2c.slice/crio-ae46aa6beb036625c7d5b581119d056f1bfa9d4e9d1e0500a9af75d41ebab21f WatchSource:0}: Error finding container ae46aa6beb036625c7d5b581119d056f1bfa9d4e9d1e0500a9af75d41ebab21f: Status 404 returned error can't find the container with id ae46aa6beb036625c7d5b581119d056f1bfa9d4e9d1e0500a9af75d41ebab21f Feb 20 12:20:50.746568 master-0 kubenswrapper[31420]: I0220 12:20:50.743763 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d44a4-scheduler-0" podStartSLOduration=6.743742434 podStartE2EDuration="6.743742434s" podCreationTimestamp="2026-02-20 12:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:50.702442436 +0000 UTC m=+955.421680677" watchObservedRunningTime="2026-02-20 12:20:50.743742434 +0000 UTC m=+955.462980675" Feb 20 12:20:50.985615 master-0 kubenswrapper[31420]: I0220 12:20:50.982140 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c4f4ddf86-w8dng"] Feb 20 12:20:50.985615 master-0 kubenswrapper[31420]: I0220 12:20:50.984802 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.002762 master-0 kubenswrapper[31420]: I0220 12:20:51.002017 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c4f4ddf86-w8dng"] Feb 20 12:20:51.102555 master-0 kubenswrapper[31420]: I0220 12:20:51.101559 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d19c8d22-6d80-4412-bf5e-11082d827b39-logs\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.102555 master-0 kubenswrapper[31420]: I0220 12:20:51.101675 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-internal-tls-certs\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.102555 master-0 kubenswrapper[31420]: I0220 12:20:51.101740 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-combined-ca-bundle\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.102555 master-0 kubenswrapper[31420]: I0220 12:20:51.101784 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-config-data\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.102555 master-0 kubenswrapper[31420]: I0220 12:20:51.101849 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-public-tls-certs\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.102555 master-0 kubenswrapper[31420]: I0220 12:20:51.102192 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-scripts\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.103071 master-0 kubenswrapper[31420]: I0220 12:20:51.103037 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zm42\" (UniqueName: \"kubernetes.io/projected/d19c8d22-6d80-4412-bf5e-11082d827b39-kube-api-access-2zm42\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.207355 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zm42\" (UniqueName: \"kubernetes.io/projected/d19c8d22-6d80-4412-bf5e-11082d827b39-kube-api-access-2zm42\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.210671 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d19c8d22-6d80-4412-bf5e-11082d827b39-logs\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 
12:20:51.210767 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-internal-tls-certs\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.210807 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-combined-ca-bundle\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.210846 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-config-data\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.210926 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-public-tls-certs\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.211221 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-scripts\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.212105 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d19c8d22-6d80-4412-bf5e-11082d827b39-logs\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.216641 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-scripts\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.219225 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-internal-tls-certs\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.219385 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-public-tls-certs\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.221073 master-0 kubenswrapper[31420]: I0220 12:20:51.220249 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-config-data\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.232627 master-0 kubenswrapper[31420]: I0220 12:20:51.232502 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d19c8d22-6d80-4412-bf5e-11082d827b39-combined-ca-bundle\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.232771 master-0 kubenswrapper[31420]: I0220 12:20:51.232734 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zm42\" (UniqueName: \"kubernetes.io/projected/d19c8d22-6d80-4412-bf5e-11082d827b39-kube-api-access-2zm42\") pod \"placement-c4f4ddf86-w8dng\" (UID: \"d19c8d22-6d80-4412-bf5e-11082d827b39\") " pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.245084 master-0 kubenswrapper[31420]: I0220 12:20:51.245021 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-584bd6df9d-zt8sf"] Feb 20 12:20:51.320864 master-0 kubenswrapper[31420]: I0220 12:20:51.320808 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:51.490799 master-0 kubenswrapper[31420]: I0220 12:20:51.490689 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" event={"ID":"bedd35bf-2057-4ed2-b1bd-5b65e43788b0","Type":"ContainerStarted","Data":"e7e08004d7636b14fa970434cb4dff81287ef80692e718ae5831a963bda9c13a"} Feb 20 12:20:51.491280 master-0 kubenswrapper[31420]: I0220 12:20:51.491035 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:51.492393 master-0 kubenswrapper[31420]: I0220 12:20:51.492360 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerStarted","Data":"58943731e82f408df4b66d7e8d710e25922dac000446b894537f9d736471f371"} Feb 20 12:20:51.492492 master-0 kubenswrapper[31420]: I0220 12:20:51.492476 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" 
event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerStarted","Data":"ae46aa6beb036625c7d5b581119d056f1bfa9d4e9d1e0500a9af75d41ebab21f"} Feb 20 12:20:51.538402 master-0 kubenswrapper[31420]: I0220 12:20:51.526584 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" podStartSLOduration=5.526486448 podStartE2EDuration="5.526486448s" podCreationTimestamp="2026-02-20 12:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:51.51274431 +0000 UTC m=+956.231982551" watchObservedRunningTime="2026-02-20 12:20:51.526486448 +0000 UTC m=+956.245724699" Feb 20 12:20:52.528249 master-0 kubenswrapper[31420]: I0220 12:20:52.528179 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-584bd6df9d-zt8sf" event={"ID":"056164a2-48ed-4f27-80d4-a1fae8ebba54","Type":"ContainerStarted","Data":"ca1b47ddd3535a2140c6128cb1d8970c4e6ca867584a72be57d2d907730a9dce"} Feb 20 12:20:52.757613 master-0 kubenswrapper[31420]: I0220 12:20:52.750305 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8446c48bd9-jqsfx" podUID="4e04e8c1-0541-4161-bb09-f4250f360d61" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.226:5353: i/o timeout" Feb 20 12:20:52.867026 master-0 kubenswrapper[31420]: I0220 12:20:52.864873 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:52.907109 master-0 kubenswrapper[31420]: I0220 12:20:52.895971 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:52.969060 master-0 kubenswrapper[31420]: I0220 12:20:52.969004 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpj2d\" (UniqueName: \"kubernetes.io/projected/95fc91ce-d187-45a1-bc88-45c0415d6cde-kube-api-access-hpj2d\") pod \"95fc91ce-d187-45a1-bc88-45c0415d6cde\" (UID: \"95fc91ce-d187-45a1-bc88-45c0415d6cde\") " Feb 20 12:20:52.969501 master-0 kubenswrapper[31420]: I0220 12:20:52.969243 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk9k7\" (UniqueName: \"kubernetes.io/projected/aab73f02-e440-40c2-bc9d-073803f49fc8-kube-api-access-kk9k7\") pod \"aab73f02-e440-40c2-bc9d-073803f49fc8\" (UID: \"aab73f02-e440-40c2-bc9d-073803f49fc8\") " Feb 20 12:20:52.969501 master-0 kubenswrapper[31420]: I0220 12:20:52.969324 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab73f02-e440-40c2-bc9d-073803f49fc8-operator-scripts\") pod \"aab73f02-e440-40c2-bc9d-073803f49fc8\" (UID: \"aab73f02-e440-40c2-bc9d-073803f49fc8\") " Feb 20 12:20:52.969501 master-0 kubenswrapper[31420]: I0220 12:20:52.969352 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc91ce-d187-45a1-bc88-45c0415d6cde-operator-scripts\") pod \"95fc91ce-d187-45a1-bc88-45c0415d6cde\" (UID: \"95fc91ce-d187-45a1-bc88-45c0415d6cde\") " Feb 20 12:20:52.970348 master-0 kubenswrapper[31420]: I0220 12:20:52.970320 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fc91ce-d187-45a1-bc88-45c0415d6cde-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95fc91ce-d187-45a1-bc88-45c0415d6cde" (UID: "95fc91ce-d187-45a1-bc88-45c0415d6cde"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:52.970827 master-0 kubenswrapper[31420]: I0220 12:20:52.970800 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab73f02-e440-40c2-bc9d-073803f49fc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aab73f02-e440-40c2-bc9d-073803f49fc8" (UID: "aab73f02-e440-40c2-bc9d-073803f49fc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:52.979382 master-0 kubenswrapper[31420]: I0220 12:20:52.979328 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab73f02-e440-40c2-bc9d-073803f49fc8-kube-api-access-kk9k7" (OuterVolumeSpecName: "kube-api-access-kk9k7") pod "aab73f02-e440-40c2-bc9d-073803f49fc8" (UID: "aab73f02-e440-40c2-bc9d-073803f49fc8"). InnerVolumeSpecName "kube-api-access-kk9k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:52.980482 master-0 kubenswrapper[31420]: I0220 12:20:52.980448 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fc91ce-d187-45a1-bc88-45c0415d6cde-kube-api-access-hpj2d" (OuterVolumeSpecName: "kube-api-access-hpj2d") pod "95fc91ce-d187-45a1-bc88-45c0415d6cde" (UID: "95fc91ce-d187-45a1-bc88-45c0415d6cde"). InnerVolumeSpecName "kube-api-access-hpj2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:53.071519 master-0 kubenswrapper[31420]: I0220 12:20:53.070872 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c4f4ddf86-w8dng"] Feb 20 12:20:53.077552 master-0 kubenswrapper[31420]: I0220 12:20:53.072502 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk9k7\" (UniqueName: \"kubernetes.io/projected/aab73f02-e440-40c2-bc9d-073803f49fc8-kube-api-access-kk9k7\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:53.077552 master-0 kubenswrapper[31420]: I0220 12:20:53.072579 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab73f02-e440-40c2-bc9d-073803f49fc8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:53.077552 master-0 kubenswrapper[31420]: I0220 12:20:53.072595 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc91ce-d187-45a1-bc88-45c0415d6cde-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:53.077552 master-0 kubenswrapper[31420]: I0220 12:20:53.072609 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpj2d\" (UniqueName: \"kubernetes.io/projected/95fc91ce-d187-45a1-bc88-45c0415d6cde-kube-api-access-hpj2d\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:53.542099 master-0 kubenswrapper[31420]: I0220 12:20:53.542019 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c4f4ddf86-w8dng" event={"ID":"d19c8d22-6d80-4412-bf5e-11082d827b39","Type":"ContainerStarted","Data":"156517eddc3d05fd0cfb727846d347780e1b94d4655392b735478fb736c16066"} Feb 20 12:20:53.542099 master-0 kubenswrapper[31420]: I0220 12:20:53.542092 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c4f4ddf86-w8dng" 
event={"ID":"d19c8d22-6d80-4412-bf5e-11082d827b39","Type":"ContainerStarted","Data":"7c98421358343da7fff0f208d3030c8ce000529db3ffde6aefb9570adff1691f"} Feb 20 12:20:53.544224 master-0 kubenswrapper[31420]: I0220 12:20:53.544151 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-2p4tg" Feb 20 12:20:53.544314 master-0 kubenswrapper[31420]: I0220 12:20:53.544196 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-2p4tg" event={"ID":"aab73f02-e440-40c2-bc9d-073803f49fc8","Type":"ContainerDied","Data":"ca417e97161c3f238998b472d9de2e1f751bbfd181d6871cebf94542e2e98d86"} Feb 20 12:20:53.544314 master-0 kubenswrapper[31420]: I0220 12:20:53.544299 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca417e97161c3f238998b472d9de2e1f751bbfd181d6871cebf94542e2e98d86" Feb 20 12:20:53.546803 master-0 kubenswrapper[31420]: I0220 12:20:53.546768 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" event={"ID":"95fc91ce-d187-45a1-bc88-45c0415d6cde","Type":"ContainerDied","Data":"387eaf5aaa7009dc08246926076b02256f4f8593ccce0e5d6d2cfb33385fb560"} Feb 20 12:20:53.546803 master-0 kubenswrapper[31420]: I0220 12:20:53.546793 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="387eaf5aaa7009dc08246926076b02256f4f8593ccce0e5d6d2cfb33385fb560" Feb 20 12:20:53.547240 master-0 kubenswrapper[31420]: I0220 12:20:53.547174 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-50c8-account-create-update-m4sp9" Feb 20 12:20:53.548417 master-0 kubenswrapper[31420]: I0220 12:20:53.548373 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" event={"ID":"d2416044-6dc6-4ce7-8b30-574bce497d5e","Type":"ContainerStarted","Data":"7473eafe665cb971b24c9f54e4994aee8842e7583e20c8ac5a132702e6d35c8f"} Feb 20 12:20:53.548727 master-0 kubenswrapper[31420]: I0220 12:20:53.548643 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:53.550156 master-0 kubenswrapper[31420]: I0220 12:20:53.550114 31420 generic.go:334] "Generic (PLEG): container finished" podID="81f14191-e623-41d6-8a28-e59e94280af4" containerID="6cd3d743c63ee674317e3ddf4b0fc5ff72bc0776de995183532ebfb547e2a934" exitCode=0 Feb 20 12:20:53.550238 master-0 kubenswrapper[31420]: I0220 12:20:53.550172 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f9fdb754b-plq9n" event={"ID":"81f14191-e623-41d6-8a28-e59e94280af4","Type":"ContainerDied","Data":"6cd3d743c63ee674317e3ddf4b0fc5ff72bc0776de995183532ebfb547e2a934"} Feb 20 12:20:53.555102 master-0 kubenswrapper[31420]: I0220 12:20:53.555059 31420 generic.go:334] "Generic (PLEG): container finished" podID="056164a2-48ed-4f27-80d4-a1fae8ebba54" containerID="b339dd129ff4c3f9635b0e4b669648835342eca21f93ecebddd88f4ae64a72c4" exitCode=0 Feb 20 12:20:53.555102 master-0 kubenswrapper[31420]: I0220 12:20:53.555101 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-584bd6df9d-zt8sf" event={"ID":"056164a2-48ed-4f27-80d4-a1fae8ebba54","Type":"ContainerDied","Data":"b339dd129ff4c3f9635b0e4b669648835342eca21f93ecebddd88f4ae64a72c4"} Feb 20 12:20:53.588829 master-0 kubenswrapper[31420]: I0220 12:20:53.588691 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" podStartSLOduration=4.048297612 podStartE2EDuration="8.588658815s" podCreationTimestamp="2026-02-20 12:20:45 +0000 UTC" firstStartedPulling="2026-02-20 12:20:48.036664308 +0000 UTC m=+952.755902549" lastFinishedPulling="2026-02-20 12:20:52.577025511 +0000 UTC m=+957.296263752" observedRunningTime="2026-02-20 12:20:53.576409939 +0000 UTC m=+958.295648180" watchObservedRunningTime="2026-02-20 12:20:53.588658815 +0000 UTC m=+958.307897056" Feb 20 12:20:54.576566 master-0 kubenswrapper[31420]: I0220 12:20:54.576460 31420 generic.go:334] "Generic (PLEG): container finished" podID="1c15d66e-eaa8-4305-a5cb-1fa14e718d2c" containerID="58943731e82f408df4b66d7e8d710e25922dac000446b894537f9d736471f371" exitCode=0 Feb 20 12:20:54.577429 master-0 kubenswrapper[31420]: I0220 12:20:54.576552 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerDied","Data":"58943731e82f408df4b66d7e8d710e25922dac000446b894537f9d736471f371"} Feb 20 12:20:54.602557 master-0 kubenswrapper[31420]: I0220 12:20:54.600975 31420 generic.go:334] "Generic (PLEG): container finished" podID="81f14191-e623-41d6-8a28-e59e94280af4" containerID="bdc1bcd750efc27ea475d3e47226a648b0ebf373e5dac041eb5fcb65874d6100" exitCode=1 Feb 20 12:20:54.602557 master-0 kubenswrapper[31420]: I0220 12:20:54.601578 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f9fdb754b-plq9n" event={"ID":"81f14191-e623-41d6-8a28-e59e94280af4","Type":"ContainerDied","Data":"bdc1bcd750efc27ea475d3e47226a648b0ebf373e5dac041eb5fcb65874d6100"} Feb 20 12:20:54.602557 master-0 kubenswrapper[31420]: I0220 12:20:54.601632 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f9fdb754b-plq9n" event={"ID":"81f14191-e623-41d6-8a28-e59e94280af4","Type":"ContainerStarted","Data":"c3c649b00762e84f17758cce4a7e8843225dc564b1544cc042956fe38a3fa05f"} Feb 
20 12:20:54.602557 master-0 kubenswrapper[31420]: I0220 12:20:54.602456 31420 scope.go:117] "RemoveContainer" containerID="bdc1bcd750efc27ea475d3e47226a648b0ebf373e5dac041eb5fcb65874d6100" Feb 20 12:20:54.636340 master-0 kubenswrapper[31420]: I0220 12:20:54.636266 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-584bd6df9d-zt8sf" event={"ID":"056164a2-48ed-4f27-80d4-a1fae8ebba54","Type":"ContainerStarted","Data":"aaea33e2eac648a8a86fb017e2103e88d2b9b533abb0b0fbd7afe51fbdbb9010"} Feb 20 12:20:54.636340 master-0 kubenswrapper[31420]: I0220 12:20:54.636331 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-584bd6df9d-zt8sf" event={"ID":"056164a2-48ed-4f27-80d4-a1fae8ebba54","Type":"ContainerStarted","Data":"2b2c6bda2a0a9e97a3d77d96a884ac223446e178739f9d9c04d3f3fa35ab8a61"} Feb 20 12:20:54.636722 master-0 kubenswrapper[31420]: I0220 12:20:54.636680 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:20:54.655300 master-0 kubenswrapper[31420]: I0220 12:20:54.655212 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c4f4ddf86-w8dng" event={"ID":"d19c8d22-6d80-4412-bf5e-11082d827b39","Type":"ContainerStarted","Data":"48ba4384b1cbc73da1c82d5ddf7f6b81f51f6bc7660d1e8436a595f2ad66712b"} Feb 20 12:20:54.655604 master-0 kubenswrapper[31420]: I0220 12:20:54.655558 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:54.681971 master-0 kubenswrapper[31420]: I0220 12:20:54.681699 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-584bd6df9d-zt8sf" podStartSLOduration=3.785242852 podStartE2EDuration="4.681679949s" podCreationTimestamp="2026-02-20 12:20:50 +0000 UTC" firstStartedPulling="2026-02-20 12:20:51.682242201 +0000 UTC m=+956.401480442" lastFinishedPulling="2026-02-20 12:20:52.578679298 +0000 UTC 
m=+957.297917539" observedRunningTime="2026-02-20 12:20:54.677074109 +0000 UTC m=+959.396312370" watchObservedRunningTime="2026-02-20 12:20:54.681679949 +0000 UTC m=+959.400918190" Feb 20 12:20:54.695555 master-0 kubenswrapper[31420]: I0220 12:20:54.695466 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:54.732159 master-0 kubenswrapper[31420]: I0220 12:20:54.726158 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c4f4ddf86-w8dng" podStartSLOduration=4.726135606 podStartE2EDuration="4.726135606s" podCreationTimestamp="2026-02-20 12:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:20:54.711574584 +0000 UTC m=+959.430812845" watchObservedRunningTime="2026-02-20 12:20:54.726135606 +0000 UTC m=+959.445373847" Feb 20 12:20:54.908551 master-0 kubenswrapper[31420]: I0220 12:20:54.908333 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-d44a4-volume-lvm-iscsi-0" Feb 20 12:20:54.917333 master-0 kubenswrapper[31420]: I0220 12:20:54.916602 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-d44a4-scheduler-0" Feb 20 12:20:54.961494 master-0 kubenswrapper[31420]: I0220 12:20:54.960323 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:55.203130 master-0 kubenswrapper[31420]: I0220 12:20:55.203063 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-d44a4-backup-0" Feb 20 12:20:55.677307 master-0 kubenswrapper[31420]: I0220 12:20:55.677243 31420 generic.go:334] "Generic (PLEG): container finished" podID="d2416044-6dc6-4ce7-8b30-574bce497d5e" containerID="7473eafe665cb971b24c9f54e4994aee8842e7583e20c8ac5a132702e6d35c8f" exitCode=1 Feb 
20 12:20:55.680839 master-0 kubenswrapper[31420]: I0220 12:20:55.677330 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" event={"ID":"d2416044-6dc6-4ce7-8b30-574bce497d5e","Type":"ContainerDied","Data":"7473eafe665cb971b24c9f54e4994aee8842e7583e20c8ac5a132702e6d35c8f"} Feb 20 12:20:55.680839 master-0 kubenswrapper[31420]: I0220 12:20:55.678212 31420 scope.go:117] "RemoveContainer" containerID="7473eafe665cb971b24c9f54e4994aee8842e7583e20c8ac5a132702e6d35c8f" Feb 20 12:20:55.682138 master-0 kubenswrapper[31420]: I0220 12:20:55.682096 31420 generic.go:334] "Generic (PLEG): container finished" podID="81f14191-e623-41d6-8a28-e59e94280af4" containerID="f5ed9cb5be309fa236c46f3e3f82e9f3aa68aaeda7c8e02769f617f80088ae7d" exitCode=1 Feb 20 12:20:55.682258 master-0 kubenswrapper[31420]: I0220 12:20:55.682232 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f9fdb754b-plq9n" event={"ID":"81f14191-e623-41d6-8a28-e59e94280af4","Type":"ContainerDied","Data":"f5ed9cb5be309fa236c46f3e3f82e9f3aa68aaeda7c8e02769f617f80088ae7d"} Feb 20 12:20:55.682309 master-0 kubenswrapper[31420]: I0220 12:20:55.682279 31420 scope.go:117] "RemoveContainer" containerID="bdc1bcd750efc27ea475d3e47226a648b0ebf373e5dac041eb5fcb65874d6100" Feb 20 12:20:55.682766 master-0 kubenswrapper[31420]: I0220 12:20:55.682729 31420 scope.go:117] "RemoveContainer" containerID="f5ed9cb5be309fa236c46f3e3f82e9f3aa68aaeda7c8e02769f617f80088ae7d" Feb 20 12:20:55.683059 master-0 kubenswrapper[31420]: E0220 12:20:55.683028 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-5f9fdb754b-plq9n_openstack(81f14191-e623-41d6-8a28-e59e94280af4)\"" pod="openstack/ironic-5f9fdb754b-plq9n" podUID="81f14191-e623-41d6-8a28-e59e94280af4" Feb 20 12:20:55.684103 master-0 kubenswrapper[31420]: I0220 
12:20:55.684043 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:20:56.721094 master-0 kubenswrapper[31420]: I0220 12:20:56.721026 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" event={"ID":"d2416044-6dc6-4ce7-8b30-574bce497d5e","Type":"ContainerStarted","Data":"d75e45c881bc6b74bbd7bbf98c740ad8034658bd845e06076fa2e3543bc15c05"} Feb 20 12:20:56.721697 master-0 kubenswrapper[31420]: I0220 12:20:56.721641 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:20:56.730080 master-0 kubenswrapper[31420]: I0220 12:20:56.730029 31420 scope.go:117] "RemoveContainer" containerID="f5ed9cb5be309fa236c46f3e3f82e9f3aa68aaeda7c8e02769f617f80088ae7d" Feb 20 12:20:56.730359 master-0 kubenswrapper[31420]: E0220 12:20:56.730327 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-5f9fdb754b-plq9n_openstack(81f14191-e623-41d6-8a28-e59e94280af4)\"" pod="openstack/ironic-5f9fdb754b-plq9n" podUID="81f14191-e623-41d6-8a28-e59e94280af4" Feb 20 12:20:57.724436 master-0 kubenswrapper[31420]: I0220 12:20:57.724220 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:20:57.779244 master-0 kubenswrapper[31420]: I0220 12:20:57.779168 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:57.779244 master-0 kubenswrapper[31420]: I0220 12:20:57.779253 31420 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:20:57.781664 master-0 kubenswrapper[31420]: I0220 12:20:57.780516 31420 scope.go:117] "RemoveContainer" 
containerID="f5ed9cb5be309fa236c46f3e3f82e9f3aa68aaeda7c8e02769f617f80088ae7d" Feb 20 12:20:57.781664 master-0 kubenswrapper[31420]: E0220 12:20:57.780870 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-5f9fdb754b-plq9n_openstack(81f14191-e623-41d6-8a28-e59e94280af4)\"" pod="openstack/ironic-5f9fdb754b-plq9n" podUID="81f14191-e623-41d6-8a28-e59e94280af4" Feb 20 12:20:57.964447 master-0 kubenswrapper[31420]: I0220 12:20:57.964366 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b6f45695-fkj5c"] Feb 20 12:20:57.964745 master-0 kubenswrapper[31420]: I0220 12:20:57.964706 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" podUID="cebde23d-32f9-453e-9cea-3c240e0a8e43" containerName="dnsmasq-dns" containerID="cri-o://844fc409bd4fd08593b300e7fcacbcdecf3ba5a3e68c9f7a90c2f9f597024073" gracePeriod=10 Feb 20 12:20:58.768131 master-0 kubenswrapper[31420]: I0220 12:20:58.768021 31420 generic.go:334] "Generic (PLEG): container finished" podID="cebde23d-32f9-453e-9cea-3c240e0a8e43" containerID="844fc409bd4fd08593b300e7fcacbcdecf3ba5a3e68c9f7a90c2f9f597024073" exitCode=0 Feb 20 12:20:58.768131 master-0 kubenswrapper[31420]: I0220 12:20:58.768116 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" event={"ID":"cebde23d-32f9-453e-9cea-3c240e0a8e43","Type":"ContainerDied","Data":"844fc409bd4fd08593b300e7fcacbcdecf3ba5a3e68c9f7a90c2f9f597024073"} Feb 20 12:20:58.768799 master-0 kubenswrapper[31420]: I0220 12:20:58.768149 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" event={"ID":"cebde23d-32f9-453e-9cea-3c240e0a8e43","Type":"ContainerDied","Data":"5f987a254c3b84a1b2d3682466c8f2a5b2cbf026263d7aeae239b6d75f9ae4f7"} Feb 20 
12:20:58.768799 master-0 kubenswrapper[31420]: I0220 12:20:58.768162 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f987a254c3b84a1b2d3682466c8f2a5b2cbf026263d7aeae239b6d75f9ae4f7" Feb 20 12:20:58.772265 master-0 kubenswrapper[31420]: I0220 12:20:58.772220 31420 generic.go:334] "Generic (PLEG): container finished" podID="d2416044-6dc6-4ce7-8b30-574bce497d5e" containerID="d75e45c881bc6b74bbd7bbf98c740ad8034658bd845e06076fa2e3543bc15c05" exitCode=1 Feb 20 12:20:58.772383 master-0 kubenswrapper[31420]: I0220 12:20:58.772268 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" event={"ID":"d2416044-6dc6-4ce7-8b30-574bce497d5e","Type":"ContainerDied","Data":"d75e45c881bc6b74bbd7bbf98c740ad8034658bd845e06076fa2e3543bc15c05"} Feb 20 12:20:58.772383 master-0 kubenswrapper[31420]: I0220 12:20:58.772309 31420 scope.go:117] "RemoveContainer" containerID="7473eafe665cb971b24c9f54e4994aee8842e7583e20c8ac5a132702e6d35c8f" Feb 20 12:20:58.773301 master-0 kubenswrapper[31420]: I0220 12:20:58.773254 31420 scope.go:117] "RemoveContainer" containerID="d75e45c881bc6b74bbd7bbf98c740ad8034658bd845e06076fa2e3543bc15c05" Feb 20 12:20:58.773632 master-0 kubenswrapper[31420]: E0220 12:20:58.773603 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-5db78c68bd-t4cm6_openstack(d2416044-6dc6-4ce7-8b30-574bce497d5e)\"" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" podUID="d2416044-6dc6-4ce7-8b30-574bce497d5e" Feb 20 12:20:58.855430 master-0 kubenswrapper[31420]: I0220 12:20:58.855363 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:58.983251 master-0 kubenswrapper[31420]: I0220 12:20:58.983111 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-svc\") pod \"cebde23d-32f9-453e-9cea-3c240e0a8e43\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " Feb 20 12:20:58.983251 master-0 kubenswrapper[31420]: I0220 12:20:58.983229 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-swift-storage-0\") pod \"cebde23d-32f9-453e-9cea-3c240e0a8e43\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " Feb 20 12:20:58.983543 master-0 kubenswrapper[31420]: I0220 12:20:58.983512 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-nb\") pod \"cebde23d-32f9-453e-9cea-3c240e0a8e43\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " Feb 20 12:20:58.983672 master-0 kubenswrapper[31420]: I0220 12:20:58.983646 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-config\") pod \"cebde23d-32f9-453e-9cea-3c240e0a8e43\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " Feb 20 12:20:58.983720 master-0 kubenswrapper[31420]: I0220 12:20:58.983698 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-sb\") pod \"cebde23d-32f9-453e-9cea-3c240e0a8e43\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " Feb 20 12:20:58.983752 master-0 kubenswrapper[31420]: I0220 12:20:58.983734 31420 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5qwz8\" (UniqueName: \"kubernetes.io/projected/cebde23d-32f9-453e-9cea-3c240e0a8e43-kube-api-access-5qwz8\") pod \"cebde23d-32f9-453e-9cea-3c240e0a8e43\" (UID: \"cebde23d-32f9-453e-9cea-3c240e0a8e43\") " Feb 20 12:20:59.004480 master-0 kubenswrapper[31420]: I0220 12:20:59.004416 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cebde23d-32f9-453e-9cea-3c240e0a8e43-kube-api-access-5qwz8" (OuterVolumeSpecName: "kube-api-access-5qwz8") pod "cebde23d-32f9-453e-9cea-3c240e0a8e43" (UID: "cebde23d-32f9-453e-9cea-3c240e0a8e43"). InnerVolumeSpecName "kube-api-access-5qwz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:20:59.052709 master-0 kubenswrapper[31420]: I0220 12:20:59.052617 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cebde23d-32f9-453e-9cea-3c240e0a8e43" (UID: "cebde23d-32f9-453e-9cea-3c240e0a8e43"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:59.058110 master-0 kubenswrapper[31420]: I0220 12:20:59.058050 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-config" (OuterVolumeSpecName: "config") pod "cebde23d-32f9-453e-9cea-3c240e0a8e43" (UID: "cebde23d-32f9-453e-9cea-3c240e0a8e43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:59.063485 master-0 kubenswrapper[31420]: I0220 12:20:59.063386 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cebde23d-32f9-453e-9cea-3c240e0a8e43" (UID: "cebde23d-32f9-453e-9cea-3c240e0a8e43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:59.070631 master-0 kubenswrapper[31420]: I0220 12:20:59.070471 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cebde23d-32f9-453e-9cea-3c240e0a8e43" (UID: "cebde23d-32f9-453e-9cea-3c240e0a8e43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:59.075913 master-0 kubenswrapper[31420]: I0220 12:20:59.075834 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cebde23d-32f9-453e-9cea-3c240e0a8e43" (UID: "cebde23d-32f9-453e-9cea-3c240e0a8e43"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:20:59.086665 master-0 kubenswrapper[31420]: I0220 12:20:59.086567 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:59.086665 master-0 kubenswrapper[31420]: I0220 12:20:59.086625 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:59.086665 master-0 kubenswrapper[31420]: I0220 12:20:59.086638 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:59.086665 master-0 kubenswrapper[31420]: I0220 12:20:59.086651 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qwz8\" (UniqueName: \"kubernetes.io/projected/cebde23d-32f9-453e-9cea-3c240e0a8e43-kube-api-access-5qwz8\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:59.086665 master-0 kubenswrapper[31420]: I0220 12:20:59.086668 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:59.087162 master-0 kubenswrapper[31420]: I0220 12:20:59.086680 31420 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cebde23d-32f9-453e-9cea-3c240e0a8e43-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 12:20:59.786912 master-0 kubenswrapper[31420]: I0220 12:20:59.786763 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b6f45695-fkj5c" Feb 20 12:20:59.839679 master-0 kubenswrapper[31420]: I0220 12:20:59.839620 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b6f45695-fkj5c"] Feb 20 12:20:59.853857 master-0 kubenswrapper[31420]: I0220 12:20:59.853745 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75b6f45695-fkj5c"] Feb 20 12:21:00.388472 master-0 kubenswrapper[31420]: I0220 12:21:00.386968 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5947585c67-kc792" Feb 20 12:21:00.912642 master-0 kubenswrapper[31420]: I0220 12:21:00.912563 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-zz7jl"] Feb 20 12:21:00.913649 master-0 kubenswrapper[31420]: E0220 12:21:00.913590 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cebde23d-32f9-453e-9cea-3c240e0a8e43" containerName="dnsmasq-dns" Feb 20 12:21:00.913649 master-0 kubenswrapper[31420]: I0220 12:21:00.913621 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="cebde23d-32f9-453e-9cea-3c240e0a8e43" containerName="dnsmasq-dns" Feb 20 12:21:00.913820 master-0 kubenswrapper[31420]: E0220 12:21:00.913781 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cebde23d-32f9-453e-9cea-3c240e0a8e43" containerName="init" Feb 20 12:21:00.913887 master-0 kubenswrapper[31420]: I0220 12:21:00.913793 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="cebde23d-32f9-453e-9cea-3c240e0a8e43" containerName="init" Feb 20 12:21:00.914002 master-0 kubenswrapper[31420]: E0220 12:21:00.913980 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab73f02-e440-40c2-bc9d-073803f49fc8" containerName="mariadb-database-create" Feb 20 12:21:00.914002 master-0 kubenswrapper[31420]: I0220 12:21:00.913994 31420 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aab73f02-e440-40c2-bc9d-073803f49fc8" containerName="mariadb-database-create" Feb 20 12:21:00.914163 master-0 kubenswrapper[31420]: E0220 12:21:00.914007 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fc91ce-d187-45a1-bc88-45c0415d6cde" containerName="mariadb-account-create-update" Feb 20 12:21:00.914163 master-0 kubenswrapper[31420]: I0220 12:21:00.914014 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fc91ce-d187-45a1-bc88-45c0415d6cde" containerName="mariadb-account-create-update" Feb 20 12:21:00.914282 master-0 kubenswrapper[31420]: I0220 12:21:00.914247 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fc91ce-d187-45a1-bc88-45c0415d6cde" containerName="mariadb-account-create-update" Feb 20 12:21:00.914346 master-0 kubenswrapper[31420]: I0220 12:21:00.914301 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab73f02-e440-40c2-bc9d-073803f49fc8" containerName="mariadb-database-create" Feb 20 12:21:00.914346 master-0 kubenswrapper[31420]: I0220 12:21:00.914319 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="cebde23d-32f9-453e-9cea-3c240e0a8e43" containerName="dnsmasq-dns" Feb 20 12:21:00.915227 master-0 kubenswrapper[31420]: I0220 12:21:00.915191 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:00.917864 master-0 kubenswrapper[31420]: I0220 12:21:00.917793 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 20 12:21:00.918196 master-0 kubenswrapper[31420]: I0220 12:21:00.918159 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 20 12:21:00.941113 master-0 kubenswrapper[31420]: I0220 12:21:00.941055 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-zz7jl"] Feb 20 12:21:01.064070 master-0 kubenswrapper[31420]: I0220 12:21:01.063970 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/db935f50-a18d-4ceb-9a23-149442b7f041-etc-podinfo\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.064326 master-0 kubenswrapper[31420]: I0220 12:21:01.064143 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.064326 master-0 kubenswrapper[31420]: I0220 12:21:01.064196 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.064471 master-0 kubenswrapper[31420]: 
I0220 12:21:01.064437 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cmbj\" (UniqueName: \"kubernetes.io/projected/db935f50-a18d-4ceb-9a23-149442b7f041-kube-api-access-5cmbj\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.064589 master-0 kubenswrapper[31420]: I0220 12:21:01.064555 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-combined-ca-bundle\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.064675 master-0 kubenswrapper[31420]: I0220 12:21:01.064602 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-scripts\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.064871 master-0 kubenswrapper[31420]: I0220 12:21:01.064846 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-config\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.168630 master-0 kubenswrapper[31420]: I0220 12:21:01.168038 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-config\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " 
pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.168630 master-0 kubenswrapper[31420]: I0220 12:21:01.168128 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/db935f50-a18d-4ceb-9a23-149442b7f041-etc-podinfo\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.168630 master-0 kubenswrapper[31420]: I0220 12:21:01.168207 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.168630 master-0 kubenswrapper[31420]: I0220 12:21:01.168243 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.168630 master-0 kubenswrapper[31420]: I0220 12:21:01.168300 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cmbj\" (UniqueName: \"kubernetes.io/projected/db935f50-a18d-4ceb-9a23-149442b7f041-kube-api-access-5cmbj\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.168630 master-0 kubenswrapper[31420]: I0220 12:21:01.168327 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-combined-ca-bundle\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.168630 master-0 kubenswrapper[31420]: I0220 12:21:01.168346 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-scripts\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.171741 master-0 kubenswrapper[31420]: I0220 12:21:01.169571 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.171741 master-0 kubenswrapper[31420]: I0220 12:21:01.169867 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.174363 master-0 kubenswrapper[31420]: I0220 12:21:01.174307 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/db935f50-a18d-4ceb-9a23-149442b7f041-etc-podinfo\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.174438 master-0 kubenswrapper[31420]: I0220 12:21:01.174361 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-combined-ca-bundle\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.179546 master-0 kubenswrapper[31420]: I0220 12:21:01.177588 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-scripts\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.179546 master-0 kubenswrapper[31420]: I0220 12:21:01.178374 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-config\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.188822 master-0 kubenswrapper[31420]: I0220 12:21:01.188767 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cmbj\" (UniqueName: \"kubernetes.io/projected/db935f50-a18d-4ceb-9a23-149442b7f041-kube-api-access-5cmbj\") pod \"ironic-inspector-db-sync-zz7jl\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") " pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.244550 master-0 kubenswrapper[31420]: I0220 12:21:01.244425 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-zz7jl" Feb 20 12:21:01.522171 master-0 kubenswrapper[31420]: I0220 12:21:01.522109 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cebde23d-32f9-453e-9cea-3c240e0a8e43" path="/var/lib/kubelet/pods/cebde23d-32f9-453e-9cea-3c240e0a8e43/volumes" Feb 20 12:21:01.804120 master-0 kubenswrapper[31420]: I0220 12:21:01.804058 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-zz7jl"] Feb 20 12:21:01.821373 master-0 kubenswrapper[31420]: W0220 12:21:01.821299 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb935f50_a18d_4ceb_9a23_149442b7f041.slice/crio-45aecfa296e7763e9721cb6cbeda177b2acb38ec27b7bb4f93dcdf88f41d6d23 WatchSource:0}: Error finding container 45aecfa296e7763e9721cb6cbeda177b2acb38ec27b7bb4f93dcdf88f41d6d23: Status 404 returned error can't find the container with id 45aecfa296e7763e9721cb6cbeda177b2acb38ec27b7bb4f93dcdf88f41d6d23 Feb 20 12:21:01.953075 master-0 kubenswrapper[31420]: I0220 12:21:01.953023 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-584bd6df9d-zt8sf" Feb 20 12:21:02.185545 master-0 kubenswrapper[31420]: I0220 12:21:02.181563 31420 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:21:02.185545 master-0 kubenswrapper[31420]: I0220 12:21:02.183652 31420 scope.go:117] "RemoveContainer" containerID="d75e45c881bc6b74bbd7bbf98c740ad8034658bd845e06076fa2e3543bc15c05" Feb 20 12:21:02.201260 master-0 kubenswrapper[31420]: E0220 12:21:02.200826 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent 
pod=ironic-neutron-agent-5db78c68bd-t4cm6_openstack(d2416044-6dc6-4ce7-8b30-574bce497d5e)\"" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" podUID="d2416044-6dc6-4ce7-8b30-574bce497d5e" Feb 20 12:21:02.258385 master-0 kubenswrapper[31420]: I0220 12:21:02.257514 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 20 12:21:02.262123 master-0 kubenswrapper[31420]: I0220 12:21:02.260801 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 20 12:21:02.264277 master-0 kubenswrapper[31420]: I0220 12:21:02.263743 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 20 12:21:02.264277 master-0 kubenswrapper[31420]: I0220 12:21:02.263816 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 20 12:21:02.298609 master-0 kubenswrapper[31420]: I0220 12:21:02.295603 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 20 12:21:02.303874 master-0 kubenswrapper[31420]: I0220 12:21:02.303730 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-5f9fdb754b-plq9n"] Feb 20 12:21:02.304549 master-0 kubenswrapper[31420]: I0220 12:21:02.304386 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-5f9fdb754b-plq9n" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api-log" containerID="cri-o://c3c649b00762e84f17758cce4a7e8843225dc564b1544cc042956fe38a3fa05f" gracePeriod=60 Feb 20 12:21:02.358070 master-0 kubenswrapper[31420]: I0220 12:21:02.357953 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 20 12:21:02.361185 master-0 kubenswrapper[31420]: E0220 12:21:02.361125 31420 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-2jh4c openstack-config 
openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-2jh4c openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="7e4f2bb2-455b-4ef5-be92-d14932c75171" Feb 20 12:21:02.368290 master-0 kubenswrapper[31420]: I0220 12:21:02.368236 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 20 12:21:02.401952 master-0 kubenswrapper[31420]: I0220 12:21:02.401869 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 20 12:21:02.404594 master-0 kubenswrapper[31420]: I0220 12:21:02.404370 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 20 12:21:02.430420 master-0 kubenswrapper[31420]: I0220 12:21:02.430360 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 20 12:21:02.518576 master-0 kubenswrapper[31420]: I0220 12:21:02.517044 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50774a30-2089-4dcc-9b00-51a5a600c68b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.518576 master-0 kubenswrapper[31420]: I0220 12:21:02.517154 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/50774a30-2089-4dcc-9b00-51a5a600c68b-openstack-config-secret\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.518576 master-0 kubenswrapper[31420]: I0220 12:21:02.517284 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km7p7\" (UniqueName: 
\"kubernetes.io/projected/50774a30-2089-4dcc-9b00-51a5a600c68b-kube-api-access-km7p7\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.518576 master-0 kubenswrapper[31420]: I0220 12:21:02.517335 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/50774a30-2089-4dcc-9b00-51a5a600c68b-openstack-config\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.574039 master-0 kubenswrapper[31420]: E0220 12:21:02.573955 31420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81f14191_e623_41d6_8a28_e59e94280af4.slice/crio-c3c649b00762e84f17758cce4a7e8843225dc564b1544cc042956fe38a3fa05f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81f14191_e623_41d6_8a28_e59e94280af4.slice/crio-conmon-c3c649b00762e84f17758cce4a7e8843225dc564b1544cc042956fe38a3fa05f.scope\": RecentStats: unable to find data in memory cache]" Feb 20 12:21:02.620638 master-0 kubenswrapper[31420]: I0220 12:21:02.620582 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/50774a30-2089-4dcc-9b00-51a5a600c68b-openstack-config\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.620973 master-0 kubenswrapper[31420]: I0220 12:21:02.620956 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50774a30-2089-4dcc-9b00-51a5a600c68b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") 
" pod="openstack/openstackclient" Feb 20 12:21:02.621123 master-0 kubenswrapper[31420]: I0220 12:21:02.621106 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/50774a30-2089-4dcc-9b00-51a5a600c68b-openstack-config-secret\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.621289 master-0 kubenswrapper[31420]: I0220 12:21:02.621275 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km7p7\" (UniqueName: \"kubernetes.io/projected/50774a30-2089-4dcc-9b00-51a5a600c68b-kube-api-access-km7p7\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.621845 master-0 kubenswrapper[31420]: I0220 12:21:02.621806 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/50774a30-2089-4dcc-9b00-51a5a600c68b-openstack-config\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.625377 master-0 kubenswrapper[31420]: I0220 12:21:02.625356 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/50774a30-2089-4dcc-9b00-51a5a600c68b-openstack-config-secret\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.625870 master-0 kubenswrapper[31420]: I0220 12:21:02.625820 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50774a30-2089-4dcc-9b00-51a5a600c68b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.649783 master-0 
kubenswrapper[31420]: I0220 12:21:02.649736 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km7p7\" (UniqueName: \"kubernetes.io/projected/50774a30-2089-4dcc-9b00-51a5a600c68b-kube-api-access-km7p7\") pod \"openstackclient\" (UID: \"50774a30-2089-4dcc-9b00-51a5a600c68b\") " pod="openstack/openstackclient" Feb 20 12:21:02.768938 master-0 kubenswrapper[31420]: I0220 12:21:02.767039 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 20 12:21:02.927552 master-0 kubenswrapper[31420]: I0220 12:21:02.919798 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-zz7jl" event={"ID":"db935f50-a18d-4ceb-9a23-149442b7f041","Type":"ContainerStarted","Data":"45aecfa296e7763e9721cb6cbeda177b2acb38ec27b7bb4f93dcdf88f41d6d23"} Feb 20 12:21:02.975547 master-0 kubenswrapper[31420]: I0220 12:21:02.957859 31420 generic.go:334] "Generic (PLEG): container finished" podID="81f14191-e623-41d6-8a28-e59e94280af4" containerID="c3c649b00762e84f17758cce4a7e8843225dc564b1544cc042956fe38a3fa05f" exitCode=143 Feb 20 12:21:02.975547 master-0 kubenswrapper[31420]: I0220 12:21:02.957956 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 20 12:21:02.975547 master-0 kubenswrapper[31420]: I0220 12:21:02.958570 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f9fdb754b-plq9n" event={"ID":"81f14191-e623-41d6-8a28-e59e94280af4","Type":"ContainerDied","Data":"c3c649b00762e84f17758cce4a7e8843225dc564b1544cc042956fe38a3fa05f"} Feb 20 12:21:02.975547 master-0 kubenswrapper[31420]: I0220 12:21:02.971161 31420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7e4f2bb2-455b-4ef5-be92-d14932c75171" podUID="50774a30-2089-4dcc-9b00-51a5a600c68b" Feb 20 12:21:02.989183 master-0 kubenswrapper[31420]: I0220 12:21:02.989051 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:21:02.999593 master-0 kubenswrapper[31420]: I0220 12:21:02.999545 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 20 12:21:03.091270 master-0 kubenswrapper[31420]: I0220 12:21:03.090937 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-config-data-merged\") pod \"81f14191-e623-41d6-8a28-e59e94280af4\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " Feb 20 12:21:03.091270 master-0 kubenswrapper[31420]: I0220 12:21:03.091064 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-scripts\") pod \"81f14191-e623-41d6-8a28-e59e94280af4\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " Feb 20 12:21:03.091270 master-0 kubenswrapper[31420]: I0220 12:21:03.091154 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data-custom\") pod \"81f14191-e623-41d6-8a28-e59e94280af4\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " Feb 20 12:21:03.091270 master-0 kubenswrapper[31420]: I0220 12:21:03.091186 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-logs\") pod \"81f14191-e623-41d6-8a28-e59e94280af4\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " Feb 20 12:21:03.091270 master-0 kubenswrapper[31420]: I0220 12:21:03.091207 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/81f14191-e623-41d6-8a28-e59e94280af4-etc-podinfo\") pod \"81f14191-e623-41d6-8a28-e59e94280af4\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " Feb 20 12:21:03.091270 master-0 kubenswrapper[31420]: I0220 12:21:03.091226 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2wpl\" (UniqueName: \"kubernetes.io/projected/81f14191-e623-41d6-8a28-e59e94280af4-kube-api-access-z2wpl\") pod \"81f14191-e623-41d6-8a28-e59e94280af4\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " Feb 20 12:21:03.091270 master-0 kubenswrapper[31420]: I0220 12:21:03.091261 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-combined-ca-bundle\") pod \"81f14191-e623-41d6-8a28-e59e94280af4\" (UID: \"81f14191-e623-41d6-8a28-e59e94280af4\") " Feb 20 12:21:03.091709 master-0 kubenswrapper[31420]: I0220 12:21:03.091514 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data\") pod \"81f14191-e623-41d6-8a28-e59e94280af4\" (UID: 
\"81f14191-e623-41d6-8a28-e59e94280af4\") " Feb 20 12:21:03.092266 master-0 kubenswrapper[31420]: I0220 12:21:03.091926 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "81f14191-e623-41d6-8a28-e59e94280af4" (UID: "81f14191-e623-41d6-8a28-e59e94280af4"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:21:03.092266 master-0 kubenswrapper[31420]: I0220 12:21:03.092222 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-logs" (OuterVolumeSpecName: "logs") pod "81f14191-e623-41d6-8a28-e59e94280af4" (UID: "81f14191-e623-41d6-8a28-e59e94280af4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:21:03.092840 master-0 kubenswrapper[31420]: I0220 12:21:03.092818 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:03.092840 master-0 kubenswrapper[31420]: I0220 12:21:03.092839 31420 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81f14191-e623-41d6-8a28-e59e94280af4-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:03.095184 master-0 kubenswrapper[31420]: I0220 12:21:03.095156 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/81f14191-e623-41d6-8a28-e59e94280af4-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "81f14191-e623-41d6-8a28-e59e94280af4" (UID: "81f14191-e623-41d6-8a28-e59e94280af4"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 20 12:21:03.095630 master-0 kubenswrapper[31420]: I0220 12:21:03.095591 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81f14191-e623-41d6-8a28-e59e94280af4" (UID: "81f14191-e623-41d6-8a28-e59e94280af4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:03.096261 master-0 kubenswrapper[31420]: I0220 12:21:03.096222 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f14191-e623-41d6-8a28-e59e94280af4-kube-api-access-z2wpl" (OuterVolumeSpecName: "kube-api-access-z2wpl") pod "81f14191-e623-41d6-8a28-e59e94280af4" (UID: "81f14191-e623-41d6-8a28-e59e94280af4"). InnerVolumeSpecName "kube-api-access-z2wpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:03.099476 master-0 kubenswrapper[31420]: I0220 12:21:03.099404 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-scripts" (OuterVolumeSpecName: "scripts") pod "81f14191-e623-41d6-8a28-e59e94280af4" (UID: "81f14191-e623-41d6-8a28-e59e94280af4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:03.144783 master-0 kubenswrapper[31420]: I0220 12:21:03.142059 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data" (OuterVolumeSpecName: "config-data") pod "81f14191-e623-41d6-8a28-e59e94280af4" (UID: "81f14191-e623-41d6-8a28-e59e94280af4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:03.160282 master-0 kubenswrapper[31420]: I0220 12:21:03.160192 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81f14191-e623-41d6-8a28-e59e94280af4" (UID: "81f14191-e623-41d6-8a28-e59e94280af4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:03.195135 master-0 kubenswrapper[31420]: I0220 12:21:03.195083 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:03.195135 master-0 kubenswrapper[31420]: I0220 12:21:03.195128 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:03.195135 master-0 kubenswrapper[31420]: I0220 12:21:03.195138 31420 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:03.195135 master-0 kubenswrapper[31420]: I0220 12:21:03.195148 31420 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/81f14191-e623-41d6-8a28-e59e94280af4-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:03.195135 master-0 kubenswrapper[31420]: I0220 12:21:03.195158 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2wpl\" (UniqueName: \"kubernetes.io/projected/81f14191-e623-41d6-8a28-e59e94280af4-kube-api-access-z2wpl\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:03.195135 master-0 kubenswrapper[31420]: I0220 
12:21:03.195167 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f14191-e623-41d6-8a28-e59e94280af4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:03.394283 master-0 kubenswrapper[31420]: I0220 12:21:03.394198 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 20 12:21:03.397324 master-0 kubenswrapper[31420]: W0220 12:21:03.397247 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50774a30_2089_4dcc_9b00_51a5a600c68b.slice/crio-370223e1d621a0f232413e7c0fe8d9bffa993ec3cf1aa5d1537aa23b9485b42f WatchSource:0}: Error finding container 370223e1d621a0f232413e7c0fe8d9bffa993ec3cf1aa5d1537aa23b9485b42f: Status 404 returned error can't find the container with id 370223e1d621a0f232413e7c0fe8d9bffa993ec3cf1aa5d1537aa23b9485b42f Feb 20 12:21:03.517566 master-0 kubenswrapper[31420]: I0220 12:21:03.515686 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4f2bb2-455b-4ef5-be92-d14932c75171" path="/var/lib/kubelet/pods/7e4f2bb2-455b-4ef5-be92-d14932c75171/volumes" Feb 20 12:21:03.972218 master-0 kubenswrapper[31420]: I0220 12:21:03.972158 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-5f9fdb754b-plq9n" Feb 20 12:21:03.973216 master-0 kubenswrapper[31420]: I0220 12:21:03.973147 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f9fdb754b-plq9n" event={"ID":"81f14191-e623-41d6-8a28-e59e94280af4","Type":"ContainerDied","Data":"8b9d290d3d6fd0004a3b267569078cea27fd17fd39fac8257b81cfa517648bb0"} Feb 20 12:21:03.973282 master-0 kubenswrapper[31420]: I0220 12:21:03.973250 31420 scope.go:117] "RemoveContainer" containerID="f5ed9cb5be309fa236c46f3e3f82e9f3aa68aaeda7c8e02769f617f80088ae7d" Feb 20 12:21:03.977830 master-0 kubenswrapper[31420]: I0220 12:21:03.977753 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 20 12:21:03.978295 master-0 kubenswrapper[31420]: I0220 12:21:03.977902 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"50774a30-2089-4dcc-9b00-51a5a600c68b","Type":"ContainerStarted","Data":"370223e1d621a0f232413e7c0fe8d9bffa993ec3cf1aa5d1537aa23b9485b42f"} Feb 20 12:21:04.006446 master-0 kubenswrapper[31420]: I0220 12:21:04.006392 31420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7e4f2bb2-455b-4ef5-be92-d14932c75171" podUID="50774a30-2089-4dcc-9b00-51a5a600c68b" Feb 20 12:21:04.014142 master-0 kubenswrapper[31420]: I0220 12:21:04.014035 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-5f9fdb754b-plq9n"] Feb 20 12:21:04.041685 master-0 kubenswrapper[31420]: I0220 12:21:04.035521 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-5f9fdb754b-plq9n"] Feb 20 12:21:04.834200 master-0 kubenswrapper[31420]: I0220 12:21:04.833582 31420 scope.go:117] "RemoveContainer" containerID="c3c649b00762e84f17758cce4a7e8843225dc564b1544cc042956fe38a3fa05f" Feb 20 12:21:04.878740 master-0 kubenswrapper[31420]: I0220 12:21:04.878698 31420 
scope.go:117] "RemoveContainer" containerID="6cd3d743c63ee674317e3ddf4b0fc5ff72bc0776de995183532ebfb547e2a934" Feb 20 12:21:05.512973 master-0 kubenswrapper[31420]: I0220 12:21:05.512919 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f14191-e623-41d6-8a28-e59e94280af4" path="/var/lib/kubelet/pods/81f14191-e623-41d6-8a28-e59e94280af4/volumes" Feb 20 12:21:07.036411 master-0 kubenswrapper[31420]: I0220 12:21:07.036269 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-zz7jl" event={"ID":"db935f50-a18d-4ceb-9a23-149442b7f041","Type":"ContainerStarted","Data":"97757dc3712b363521655fbabb164ff78d64e5f2d3e118f917513153a122a4f6"} Feb 20 12:21:07.071772 master-0 kubenswrapper[31420]: I0220 12:21:07.071643 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-zz7jl" podStartSLOduration=2.867325626 podStartE2EDuration="7.07161737s" podCreationTimestamp="2026-02-20 12:21:00 +0000 UTC" firstStartedPulling="2026-02-20 12:21:01.824197222 +0000 UTC m=+966.543435463" lastFinishedPulling="2026-02-20 12:21:06.028488976 +0000 UTC m=+970.747727207" observedRunningTime="2026-02-20 12:21:07.055686879 +0000 UTC m=+971.774925120" watchObservedRunningTime="2026-02-20 12:21:07.07161737 +0000 UTC m=+971.790855611" Feb 20 12:21:07.315985 master-0 kubenswrapper[31420]: I0220 12:21:07.315841 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6c6dd6f84-vtwv4"] Feb 20 12:21:07.318101 master-0 kubenswrapper[31420]: E0220 12:21:07.317354 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="init" Feb 20 12:21:07.318101 master-0 kubenswrapper[31420]: I0220 12:21:07.317390 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="init" Feb 20 12:21:07.318101 master-0 kubenswrapper[31420]: E0220 12:21:07.317462 
31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api" Feb 20 12:21:07.318101 master-0 kubenswrapper[31420]: I0220 12:21:07.317473 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api" Feb 20 12:21:07.318101 master-0 kubenswrapper[31420]: E0220 12:21:07.317485 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api-log" Feb 20 12:21:07.318101 master-0 kubenswrapper[31420]: I0220 12:21:07.317496 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api-log" Feb 20 12:21:07.318387 master-0 kubenswrapper[31420]: I0220 12:21:07.318160 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api" Feb 20 12:21:07.318387 master-0 kubenswrapper[31420]: I0220 12:21:07.318213 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api" Feb 20 12:21:07.318387 master-0 kubenswrapper[31420]: I0220 12:21:07.318245 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api-log" Feb 20 12:21:07.320078 master-0 kubenswrapper[31420]: E0220 12:21:07.318893 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api" Feb 20 12:21:07.320078 master-0 kubenswrapper[31420]: I0220 12:21:07.318917 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f14191-e623-41d6-8a28-e59e94280af4" containerName="ironic-api" Feb 20 12:21:07.349651 master-0 kubenswrapper[31420]: I0220 12:21:07.349595 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.352715 master-0 kubenswrapper[31420]: I0220 12:21:07.352661 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 20 12:21:07.353013 master-0 kubenswrapper[31420]: I0220 12:21:07.352966 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 20 12:21:07.353092 master-0 kubenswrapper[31420]: I0220 12:21:07.353053 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 20 12:21:07.360780 master-0 kubenswrapper[31420]: I0220 12:21:07.360708 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c6dd6f84-vtwv4"] Feb 20 12:21:07.402255 master-0 kubenswrapper[31420]: I0220 12:21:07.402170 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0da685-95b9-432d-85ba-f6d0389844cb-log-httpd\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.402776 master-0 kubenswrapper[31420]: I0220 12:21:07.402758 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-public-tls-certs\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.402881 master-0 kubenswrapper[31420]: I0220 12:21:07.402867 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p4gg\" (UniqueName: \"kubernetes.io/projected/eb0da685-95b9-432d-85ba-f6d0389844cb-kube-api-access-6p4gg\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: 
\"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.403050 master-0 kubenswrapper[31420]: I0220 12:21:07.403037 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-combined-ca-bundle\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.403139 master-0 kubenswrapper[31420]: I0220 12:21:07.403126 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb0da685-95b9-432d-85ba-f6d0389844cb-etc-swift\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.403215 master-0 kubenswrapper[31420]: I0220 12:21:07.403201 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-internal-tls-certs\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.403330 master-0 kubenswrapper[31420]: I0220 12:21:07.403317 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0da685-95b9-432d-85ba-f6d0389844cb-run-httpd\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.403489 master-0 kubenswrapper[31420]: I0220 12:21:07.403473 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-config-data\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.506793 master-0 kubenswrapper[31420]: I0220 12:21:07.505970 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-config-data\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.506793 master-0 kubenswrapper[31420]: I0220 12:21:07.506049 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0da685-95b9-432d-85ba-f6d0389844cb-log-httpd\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.506793 master-0 kubenswrapper[31420]: I0220 12:21:07.506250 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-public-tls-certs\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.506793 master-0 kubenswrapper[31420]: I0220 12:21:07.506317 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p4gg\" (UniqueName: \"kubernetes.io/projected/eb0da685-95b9-432d-85ba-f6d0389844cb-kube-api-access-6p4gg\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.506793 master-0 kubenswrapper[31420]: I0220 12:21:07.506670 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-combined-ca-bundle\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.506793 master-0 kubenswrapper[31420]: I0220 12:21:07.506715 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0da685-95b9-432d-85ba-f6d0389844cb-log-httpd\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.506793 master-0 kubenswrapper[31420]: I0220 12:21:07.506762 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb0da685-95b9-432d-85ba-f6d0389844cb-etc-swift\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.506793 master-0 kubenswrapper[31420]: I0220 12:21:07.506789 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-internal-tls-certs\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.507444 master-0 kubenswrapper[31420]: I0220 12:21:07.506994 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb0da685-95b9-432d-85ba-f6d0389844cb-run-httpd\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.507651 master-0 kubenswrapper[31420]: I0220 12:21:07.507619 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/eb0da685-95b9-432d-85ba-f6d0389844cb-run-httpd\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.511330 master-0 kubenswrapper[31420]: I0220 12:21:07.511284 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-public-tls-certs\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.511485 master-0 kubenswrapper[31420]: I0220 12:21:07.511406 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-config-data\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.514277 master-0 kubenswrapper[31420]: I0220 12:21:07.514178 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-internal-tls-certs\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.514654 master-0 kubenswrapper[31420]: I0220 12:21:07.514625 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb0da685-95b9-432d-85ba-f6d0389844cb-etc-swift\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.519763 master-0 kubenswrapper[31420]: I0220 12:21:07.519684 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb0da685-95b9-432d-85ba-f6d0389844cb-combined-ca-bundle\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.529242 master-0 kubenswrapper[31420]: I0220 12:21:07.529193 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p4gg\" (UniqueName: \"kubernetes.io/projected/eb0da685-95b9-432d-85ba-f6d0389844cb-kube-api-access-6p4gg\") pod \"swift-proxy-6c6dd6f84-vtwv4\" (UID: \"eb0da685-95b9-432d-85ba-f6d0389844cb\") " pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.694451 master-0 kubenswrapper[31420]: I0220 12:21:07.694306 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:07.811164 master-0 kubenswrapper[31420]: I0220 12:21:07.811096 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c69cf75d-mfgck" Feb 20 12:21:09.073080 master-0 kubenswrapper[31420]: I0220 12:21:09.072985 31420 generic.go:334] "Generic (PLEG): container finished" podID="db935f50-a18d-4ceb-9a23-149442b7f041" containerID="97757dc3712b363521655fbabb164ff78d64e5f2d3e118f917513153a122a4f6" exitCode=0 Feb 20 12:21:09.073080 master-0 kubenswrapper[31420]: I0220 12:21:09.073038 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-zz7jl" event={"ID":"db935f50-a18d-4ceb-9a23-149442b7f041","Type":"ContainerDied","Data":"97757dc3712b363521655fbabb164ff78d64e5f2d3e118f917513153a122a4f6"} Feb 20 12:21:10.597909 master-0 kubenswrapper[31420]: I0220 12:21:10.597814 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c4cff4645-lz9x7" Feb 20 12:21:10.704626 master-0 kubenswrapper[31420]: I0220 12:21:10.703877 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c69cf75d-mfgck"] Feb 20 12:21:10.704626 
master-0 kubenswrapper[31420]: I0220 12:21:10.704217 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c69cf75d-mfgck" podUID="4f6becc0-4062-4971-9300-fe40c0538d25" containerName="neutron-api" containerID="cri-o://63c35f90eed454fde6afe3d0e77f139ad569d08726242fa8e3eec960ee4204cf" gracePeriod=30
Feb 20 12:21:10.704626 master-0 kubenswrapper[31420]: I0220 12:21:10.704514 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c69cf75d-mfgck" podUID="4f6becc0-4062-4971-9300-fe40c0538d25" containerName="neutron-httpd" containerID="cri-o://d9b044a68317b3d3f5d929b39f6e6e102e65dc56db00f41d411702628c62dc1c" gracePeriod=30
Feb 20 12:21:11.099292 master-0 kubenswrapper[31420]: I0220 12:21:11.099221 31420 generic.go:334] "Generic (PLEG): container finished" podID="4f6becc0-4062-4971-9300-fe40c0538d25" containerID="d9b044a68317b3d3f5d929b39f6e6e102e65dc56db00f41d411702628c62dc1c" exitCode=0
Feb 20 12:21:11.099292 master-0 kubenswrapper[31420]: I0220 12:21:11.099281 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c69cf75d-mfgck" event={"ID":"4f6becc0-4062-4971-9300-fe40c0538d25","Type":"ContainerDied","Data":"d9b044a68317b3d3f5d929b39f6e6e102e65dc56db00f41d411702628c62dc1c"}
Feb 20 12:21:12.430570 master-0 kubenswrapper[31420]: I0220 12:21:12.430487 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"]
Feb 20 12:21:12.431198 master-0 kubenswrapper[31420]: I0220 12:21:12.430765 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e60fa-default-external-api-0" podUID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerName="glance-log" containerID="cri-o://356157ba73415aa002f3e37c1ee802ff68584de48c801ce706ce3de5e01cc676" gracePeriod=30
Feb 20 12:21:12.431198 master-0 kubenswrapper[31420]: I0220 12:21:12.430886 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e60fa-default-external-api-0" podUID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerName="glance-httpd" containerID="cri-o://1724846740ee17916c57bb8036a6f78794e53d3205353094d9eaaa4dc6bce4ee" gracePeriod=30
Feb 20 12:21:12.761180 master-0 kubenswrapper[31420]: I0220 12:21:12.761112 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8jmg4"]
Feb 20 12:21:12.763779 master-0 kubenswrapper[31420]: I0220 12:21:12.763753 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8jmg4"
Feb 20 12:21:12.769654 master-0 kubenswrapper[31420]: I0220 12:21:12.769551 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8jmg4"]
Feb 20 12:21:12.897066 master-0 kubenswrapper[31420]: I0220 12:21:12.892030 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rkmxn"]
Feb 20 12:21:12.897066 master-0 kubenswrapper[31420]: I0220 12:21:12.896396 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ae118-f4a7-48ea-b99b-8a71f297b617-operator-scripts\") pod \"nova-api-db-create-8jmg4\" (UID: \"603ae118-f4a7-48ea-b99b-8a71f297b617\") " pod="openstack/nova-api-db-create-8jmg4"
Feb 20 12:21:12.897066 master-0 kubenswrapper[31420]: I0220 12:21:12.896594 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-688d4\" (UniqueName: \"kubernetes.io/projected/603ae118-f4a7-48ea-b99b-8a71f297b617-kube-api-access-688d4\") pod \"nova-api-db-create-8jmg4\" (UID: \"603ae118-f4a7-48ea-b99b-8a71f297b617\") " pod="openstack/nova-api-db-create-8jmg4"
Feb 20 12:21:12.897549 master-0 kubenswrapper[31420]: I0220 12:21:12.897491 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkmxn"
Feb 20 12:21:12.939597 master-0 kubenswrapper[31420]: I0220 12:21:12.939501 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rkmxn"]
Feb 20 12:21:12.982864 master-0 kubenswrapper[31420]: I0220 12:21:12.982777 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f6bb-account-create-update-bmnfz"]
Feb 20 12:21:12.984863 master-0 kubenswrapper[31420]: I0220 12:21:12.984825 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f6bb-account-create-update-bmnfz"
Feb 20 12:21:12.987685 master-0 kubenswrapper[31420]: I0220 12:21:12.987654 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 20 12:21:12.995458 master-0 kubenswrapper[31420]: I0220 12:21:12.995375 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f6bb-account-create-update-bmnfz"]
Feb 20 12:21:12.998785 master-0 kubenswrapper[31420]: I0220 12:21:12.998738 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z476h\" (UniqueName: \"kubernetes.io/projected/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-kube-api-access-z476h\") pod \"nova-cell0-db-create-rkmxn\" (UID: \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\") " pod="openstack/nova-cell0-db-create-rkmxn"
Feb 20 12:21:12.998970 master-0 kubenswrapper[31420]: I0220 12:21:12.998932 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ae118-f4a7-48ea-b99b-8a71f297b617-operator-scripts\") pod \"nova-api-db-create-8jmg4\" (UID: \"603ae118-f4a7-48ea-b99b-8a71f297b617\") " pod="openstack/nova-api-db-create-8jmg4"
Feb 20 12:21:12.999036 master-0 kubenswrapper[31420]: I0220 12:21:12.999005 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-operator-scripts\") pod \"nova-cell0-db-create-rkmxn\" (UID: \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\") " pod="openstack/nova-cell0-db-create-rkmxn"
Feb 20 12:21:12.999036 master-0 kubenswrapper[31420]: I0220 12:21:12.999030 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-688d4\" (UniqueName: \"kubernetes.io/projected/603ae118-f4a7-48ea-b99b-8a71f297b617-kube-api-access-688d4\") pod \"nova-api-db-create-8jmg4\" (UID: \"603ae118-f4a7-48ea-b99b-8a71f297b617\") " pod="openstack/nova-api-db-create-8jmg4"
Feb 20 12:21:12.999986 master-0 kubenswrapper[31420]: I0220 12:21:12.999957 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ae118-f4a7-48ea-b99b-8a71f297b617-operator-scripts\") pod \"nova-api-db-create-8jmg4\" (UID: \"603ae118-f4a7-48ea-b99b-8a71f297b617\") " pod="openstack/nova-api-db-create-8jmg4"
Feb 20 12:21:13.020179 master-0 kubenswrapper[31420]: I0220 12:21:13.019105 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-688d4\" (UniqueName: \"kubernetes.io/projected/603ae118-f4a7-48ea-b99b-8a71f297b617-kube-api-access-688d4\") pod \"nova-api-db-create-8jmg4\" (UID: \"603ae118-f4a7-48ea-b99b-8a71f297b617\") " pod="openstack/nova-api-db-create-8jmg4"
Feb 20 12:21:13.048877 master-0 kubenswrapper[31420]: I0220 12:21:13.048389 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zhh86"]
Feb 20 12:21:13.050832 master-0 kubenswrapper[31420]: I0220 12:21:13.050772 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zhh86"
Feb 20 12:21:13.093589 master-0 kubenswrapper[31420]: I0220 12:21:13.093517 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zhh86"]
Feb 20 12:21:13.100465 master-0 kubenswrapper[31420]: I0220 12:21:13.100432 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2sr\" (UniqueName: \"kubernetes.io/projected/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-kube-api-access-bq2sr\") pod \"nova-api-f6bb-account-create-update-bmnfz\" (UID: \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\") " pod="openstack/nova-api-f6bb-account-create-update-bmnfz"
Feb 20 12:21:13.100625 master-0 kubenswrapper[31420]: I0220 12:21:13.100567 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-operator-scripts\") pod \"nova-cell0-db-create-rkmxn\" (UID: \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\") " pod="openstack/nova-cell0-db-create-rkmxn"
Feb 20 12:21:13.100625 master-0 kubenswrapper[31420]: I0220 12:21:13.100615 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-operator-scripts\") pod \"nova-api-f6bb-account-create-update-bmnfz\" (UID: \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\") " pod="openstack/nova-api-f6bb-account-create-update-bmnfz"
Feb 20 12:21:13.100760 master-0 kubenswrapper[31420]: I0220 12:21:13.100658 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z476h\" (UniqueName: \"kubernetes.io/projected/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-kube-api-access-z476h\") pod \"nova-cell0-db-create-rkmxn\" (UID: \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\") " pod="openstack/nova-cell0-db-create-rkmxn"
Feb 20 12:21:13.102465 master-0 kubenswrapper[31420]: I0220 12:21:13.101651 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-operator-scripts\") pod \"nova-cell0-db-create-rkmxn\" (UID: \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\") " pod="openstack/nova-cell0-db-create-rkmxn"
Feb 20 12:21:13.122183 master-0 kubenswrapper[31420]: I0220 12:21:13.122139 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z476h\" (UniqueName: \"kubernetes.io/projected/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-kube-api-access-z476h\") pod \"nova-cell0-db-create-rkmxn\" (UID: \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\") " pod="openstack/nova-cell0-db-create-rkmxn"
Feb 20 12:21:13.122676 master-0 kubenswrapper[31420]: I0220 12:21:13.122651 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8jmg4"
Feb 20 12:21:13.173002 master-0 kubenswrapper[31420]: I0220 12:21:13.172916 31420 generic.go:334] "Generic (PLEG): container finished" podID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerID="356157ba73415aa002f3e37c1ee802ff68584de48c801ce706ce3de5e01cc676" exitCode=143
Feb 20 12:21:13.173317 master-0 kubenswrapper[31420]: I0220 12:21:13.173030 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-external-api-0" event={"ID":"f95db2f7-4dcd-43a4-93fc-650f8cf79f68","Type":"ContainerDied","Data":"356157ba73415aa002f3e37c1ee802ff68584de48c801ce706ce3de5e01cc676"}
Feb 20 12:21:13.173634 master-0 kubenswrapper[31420]: I0220 12:21:13.173598 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ace2-account-create-update-8fqwl"]
Feb 20 12:21:13.177438 master-0 kubenswrapper[31420]: I0220 12:21:13.177383 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ace2-account-create-update-8fqwl"
Feb 20 12:21:13.178409 master-0 kubenswrapper[31420]: I0220 12:21:13.178357 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-zz7jl" event={"ID":"db935f50-a18d-4ceb-9a23-149442b7f041","Type":"ContainerDied","Data":"45aecfa296e7763e9721cb6cbeda177b2acb38ec27b7bb4f93dcdf88f41d6d23"}
Feb 20 12:21:13.178409 master-0 kubenswrapper[31420]: I0220 12:21:13.178402 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45aecfa296e7763e9721cb6cbeda177b2acb38ec27b7bb4f93dcdf88f41d6d23"
Feb 20 12:21:13.181663 master-0 kubenswrapper[31420]: I0220 12:21:13.181637 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 20 12:21:13.191243 master-0 kubenswrapper[31420]: I0220 12:21:13.190922 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ace2-account-create-update-8fqwl"]
Feb 20 12:21:13.203211 master-0 kubenswrapper[31420]: I0220 12:21:13.203156 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq2sr\" (UniqueName: \"kubernetes.io/projected/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-kube-api-access-bq2sr\") pod \"nova-api-f6bb-account-create-update-bmnfz\" (UID: \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\") " pod="openstack/nova-api-f6bb-account-create-update-bmnfz"
Feb 20 12:21:13.203315 master-0 kubenswrapper[31420]: I0220 12:21:13.203250 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1383ebf-b51c-4b56-bdec-191e09ab35ac-operator-scripts\") pod \"nova-cell1-db-create-zhh86\" (UID: \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\") " pod="openstack/nova-cell1-db-create-zhh86"
Feb 20 12:21:13.203366 master-0 kubenswrapper[31420]: I0220 12:21:13.203353 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvgp9\" (UniqueName: \"kubernetes.io/projected/d1383ebf-b51c-4b56-bdec-191e09ab35ac-kube-api-access-lvgp9\") pod \"nova-cell1-db-create-zhh86\" (UID: \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\") " pod="openstack/nova-cell1-db-create-zhh86"
Feb 20 12:21:13.203486 master-0 kubenswrapper[31420]: I0220 12:21:13.203461 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-operator-scripts\") pod \"nova-api-f6bb-account-create-update-bmnfz\" (UID: \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\") " pod="openstack/nova-api-f6bb-account-create-update-bmnfz"
Feb 20 12:21:13.204516 master-0 kubenswrapper[31420]: I0220 12:21:13.204475 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-operator-scripts\") pod \"nova-api-f6bb-account-create-update-bmnfz\" (UID: \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\") " pod="openstack/nova-api-f6bb-account-create-update-bmnfz"
Feb 20 12:21:13.228783 master-0 kubenswrapper[31420]: I0220 12:21:13.228691 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq2sr\" (UniqueName: \"kubernetes.io/projected/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-kube-api-access-bq2sr\") pod \"nova-api-f6bb-account-create-update-bmnfz\" (UID: \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\") " pod="openstack/nova-api-f6bb-account-create-update-bmnfz"
Feb 20 12:21:13.259261 master-0 kubenswrapper[31420]: I0220 12:21:13.259197 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkmxn"
Feb 20 12:21:13.312150 master-0 kubenswrapper[31420]: I0220 12:21:13.310692 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1383ebf-b51c-4b56-bdec-191e09ab35ac-operator-scripts\") pod \"nova-cell1-db-create-zhh86\" (UID: \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\") " pod="openstack/nova-cell1-db-create-zhh86"
Feb 20 12:21:13.312150 master-0 kubenswrapper[31420]: I0220 12:21:13.310835 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-operator-scripts\") pod \"nova-cell0-ace2-account-create-update-8fqwl\" (UID: \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\") " pod="openstack/nova-cell0-ace2-account-create-update-8fqwl"
Feb 20 12:21:13.312150 master-0 kubenswrapper[31420]: I0220 12:21:13.310906 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvgp9\" (UniqueName: \"kubernetes.io/projected/d1383ebf-b51c-4b56-bdec-191e09ab35ac-kube-api-access-lvgp9\") pod \"nova-cell1-db-create-zhh86\" (UID: \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\") " pod="openstack/nova-cell1-db-create-zhh86"
Feb 20 12:21:13.312150 master-0 kubenswrapper[31420]: I0220 12:21:13.312081 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1383ebf-b51c-4b56-bdec-191e09ab35ac-operator-scripts\") pod \"nova-cell1-db-create-zhh86\" (UID: \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\") " pod="openstack/nova-cell1-db-create-zhh86"
Feb 20 12:21:13.319241 master-0 kubenswrapper[31420]: I0220 12:21:13.313033 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcw4w\" (UniqueName: \"kubernetes.io/projected/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-kube-api-access-vcw4w\") pod \"nova-cell0-ace2-account-create-update-8fqwl\" (UID: \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\") " pod="openstack/nova-cell0-ace2-account-create-update-8fqwl"
Feb 20 12:21:13.327182 master-0 kubenswrapper[31420]: I0220 12:21:13.326732 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-zz7jl"
Feb 20 12:21:13.353212 master-0 kubenswrapper[31420]: I0220 12:21:13.353116 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvgp9\" (UniqueName: \"kubernetes.io/projected/d1383ebf-b51c-4b56-bdec-191e09ab35ac-kube-api-access-lvgp9\") pod \"nova-cell1-db-create-zhh86\" (UID: \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\") " pod="openstack/nova-cell1-db-create-zhh86"
Feb 20 12:21:13.393887 master-0 kubenswrapper[31420]: I0220 12:21:13.393793 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f6bb-account-create-update-bmnfz"
Feb 20 12:21:13.411451 master-0 kubenswrapper[31420]: I0220 12:21:13.406178 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-bb02-account-create-update-gjp7p"]
Feb 20 12:21:13.411451 master-0 kubenswrapper[31420]: E0220 12:21:13.411055 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db935f50-a18d-4ceb-9a23-149442b7f041" containerName="ironic-inspector-db-sync"
Feb 20 12:21:13.411451 master-0 kubenswrapper[31420]: I0220 12:21:13.411093 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="db935f50-a18d-4ceb-9a23-149442b7f041" containerName="ironic-inspector-db-sync"
Feb 20 12:21:13.420683 master-0 kubenswrapper[31420]: I0220 12:21:13.412511 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="db935f50-a18d-4ceb-9a23-149442b7f041" containerName="ironic-inspector-db-sync"
Feb 20 12:21:13.420683 master-0 kubenswrapper[31420]: I0220 12:21:13.415566 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcw4w\" (UniqueName: \"kubernetes.io/projected/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-kube-api-access-vcw4w\") pod \"nova-cell0-ace2-account-create-update-8fqwl\" (UID: \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\") " pod="openstack/nova-cell0-ace2-account-create-update-8fqwl"
Feb 20 12:21:13.420683 master-0 kubenswrapper[31420]: I0220 12:21:13.415789 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-operator-scripts\") pod \"nova-cell0-ace2-account-create-update-8fqwl\" (UID: \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\") " pod="openstack/nova-cell0-ace2-account-create-update-8fqwl"
Feb 20 12:21:13.420683 master-0 kubenswrapper[31420]: I0220 12:21:13.417421 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-operator-scripts\") pod \"nova-cell0-ace2-account-create-update-8fqwl\" (UID: \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\") " pod="openstack/nova-cell0-ace2-account-create-update-8fqwl"
Feb 20 12:21:13.424540 master-0 kubenswrapper[31420]: I0220 12:21:13.424311 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bb02-account-create-update-gjp7p"]
Feb 20 12:21:13.425889 master-0 kubenswrapper[31420]: I0220 12:21:13.424666 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb02-account-create-update-gjp7p"
Feb 20 12:21:13.433474 master-0 kubenswrapper[31420]: I0220 12:21:13.432131 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 20 12:21:13.435016 master-0 kubenswrapper[31420]: I0220 12:21:13.434979 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcw4w\" (UniqueName: \"kubernetes.io/projected/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-kube-api-access-vcw4w\") pod \"nova-cell0-ace2-account-create-update-8fqwl\" (UID: \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\") " pod="openstack/nova-cell0-ace2-account-create-update-8fqwl"
Feb 20 12:21:13.517409 master-0 kubenswrapper[31420]: I0220 12:21:13.516932 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-config\") pod \"db935f50-a18d-4ceb-9a23-149442b7f041\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") "
Feb 20 12:21:13.517409 master-0 kubenswrapper[31420]: I0220 12:21:13.517149 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/db935f50-a18d-4ceb-9a23-149442b7f041-etc-podinfo\") pod \"db935f50-a18d-4ceb-9a23-149442b7f041\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") "
Feb 20 12:21:13.517409 master-0 kubenswrapper[31420]: I0220 12:21:13.517244 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"db935f50-a18d-4ceb-9a23-149442b7f041\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") "
Feb 20 12:21:13.517409 master-0 kubenswrapper[31420]: I0220 12:21:13.517267 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-combined-ca-bundle\") pod \"db935f50-a18d-4ceb-9a23-149442b7f041\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") "
Feb 20 12:21:13.517409 master-0 kubenswrapper[31420]: I0220 12:21:13.517330 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cmbj\" (UniqueName: \"kubernetes.io/projected/db935f50-a18d-4ceb-9a23-149442b7f041-kube-api-access-5cmbj\") pod \"db935f50-a18d-4ceb-9a23-149442b7f041\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") "
Feb 20 12:21:13.517409 master-0 kubenswrapper[31420]: I0220 12:21:13.517357 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-scripts\") pod \"db935f50-a18d-4ceb-9a23-149442b7f041\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") "
Feb 20 12:21:13.517409 master-0 kubenswrapper[31420]: I0220 12:21:13.517390 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic\") pod \"db935f50-a18d-4ceb-9a23-149442b7f041\" (UID: \"db935f50-a18d-4ceb-9a23-149442b7f041\") "
Feb 20 12:21:13.518074 master-0 kubenswrapper[31420]: I0220 12:21:13.518044 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smf77\" (UniqueName: \"kubernetes.io/projected/d142bf65-a53f-4fd8-95b0-46c05306e168-kube-api-access-smf77\") pod \"nova-cell1-bb02-account-create-update-gjp7p\" (UID: \"d142bf65-a53f-4fd8-95b0-46c05306e168\") " pod="openstack/nova-cell1-bb02-account-create-update-gjp7p"
Feb 20 12:21:13.518212 master-0 kubenswrapper[31420]: I0220 12:21:13.518191 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d142bf65-a53f-4fd8-95b0-46c05306e168-operator-scripts\") pod \"nova-cell1-bb02-account-create-update-gjp7p\" (UID: \"d142bf65-a53f-4fd8-95b0-46c05306e168\") " pod="openstack/nova-cell1-bb02-account-create-update-gjp7p"
Feb 20 12:21:13.519695 master-0 kubenswrapper[31420]: I0220 12:21:13.519671 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "db935f50-a18d-4ceb-9a23-149442b7f041" (UID: "db935f50-a18d-4ceb-9a23-149442b7f041"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:21:13.519868 master-0 kubenswrapper[31420]: I0220 12:21:13.519824 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "db935f50-a18d-4ceb-9a23-149442b7f041" (UID: "db935f50-a18d-4ceb-9a23-149442b7f041"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:21:13.525621 master-0 kubenswrapper[31420]: I0220 12:21:13.525387 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-scripts" (OuterVolumeSpecName: "scripts") pod "db935f50-a18d-4ceb-9a23-149442b7f041" (UID: "db935f50-a18d-4ceb-9a23-149442b7f041"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:13.529383 master-0 kubenswrapper[31420]: I0220 12:21:13.529058 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db935f50-a18d-4ceb-9a23-149442b7f041-kube-api-access-5cmbj" (OuterVolumeSpecName: "kube-api-access-5cmbj") pod "db935f50-a18d-4ceb-9a23-149442b7f041" (UID: "db935f50-a18d-4ceb-9a23-149442b7f041"). InnerVolumeSpecName "kube-api-access-5cmbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:21:13.529383 master-0 kubenswrapper[31420]: I0220 12:21:13.529053 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/db935f50-a18d-4ceb-9a23-149442b7f041-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "db935f50-a18d-4ceb-9a23-149442b7f041" (UID: "db935f50-a18d-4ceb-9a23-149442b7f041"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 20 12:21:13.574919 master-0 kubenswrapper[31420]: I0220 12:21:13.573606 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db935f50-a18d-4ceb-9a23-149442b7f041" (UID: "db935f50-a18d-4ceb-9a23-149442b7f041"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:13.576352 master-0 kubenswrapper[31420]: I0220 12:21:13.576320 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-config" (OuterVolumeSpecName: "config") pod "db935f50-a18d-4ceb-9a23-149442b7f041" (UID: "db935f50-a18d-4ceb-9a23-149442b7f041"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:13.613113 master-0 kubenswrapper[31420]: I0220 12:21:13.613052 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zhh86"
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.623734 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smf77\" (UniqueName: \"kubernetes.io/projected/d142bf65-a53f-4fd8-95b0-46c05306e168-kube-api-access-smf77\") pod \"nova-cell1-bb02-account-create-update-gjp7p\" (UID: \"d142bf65-a53f-4fd8-95b0-46c05306e168\") " pod="openstack/nova-cell1-bb02-account-create-update-gjp7p"
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.624190 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d142bf65-a53f-4fd8-95b0-46c05306e168-operator-scripts\") pod \"nova-cell1-bb02-account-create-update-gjp7p\" (UID: \"d142bf65-a53f-4fd8-95b0-46c05306e168\") " pod="openstack/nova-cell1-bb02-account-create-update-gjp7p"
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.625015 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d142bf65-a53f-4fd8-95b0-46c05306e168-operator-scripts\") pod \"nova-cell1-bb02-account-create-update-gjp7p\" (UID: \"d142bf65-a53f-4fd8-95b0-46c05306e168\") " pod="openstack/nova-cell1-bb02-account-create-update-gjp7p"
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.625207 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cmbj\" (UniqueName: \"kubernetes.io/projected/db935f50-a18d-4ceb-9a23-149442b7f041-kube-api-access-5cmbj\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.625222 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.625232 31420 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.625243 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.625253 31420 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/db935f50-a18d-4ceb-9a23-149442b7f041-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.625262 31420 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/db935f50-a18d-4ceb-9a23-149442b7f041-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:13.627548 master-0 kubenswrapper[31420]: I0220 12:21:13.625271 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db935f50-a18d-4ceb-9a23-149442b7f041-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:13.637792 master-0 kubenswrapper[31420]: I0220 12:21:13.636864 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ace2-account-create-update-8fqwl"
Feb 20 12:21:13.646479 master-0 kubenswrapper[31420]: I0220 12:21:13.645669 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smf77\" (UniqueName: \"kubernetes.io/projected/d142bf65-a53f-4fd8-95b0-46c05306e168-kube-api-access-smf77\") pod \"nova-cell1-bb02-account-create-update-gjp7p\" (UID: \"d142bf65-a53f-4fd8-95b0-46c05306e168\") " pod="openstack/nova-cell1-bb02-account-create-update-gjp7p"
Feb 20 12:21:13.745906 master-0 kubenswrapper[31420]: I0220 12:21:13.744034 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8jmg4"]
Feb 20 12:21:13.775374 master-0 kubenswrapper[31420]: I0220 12:21:13.775333 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb02-account-create-update-gjp7p"
Feb 20 12:21:13.780750 master-0 kubenswrapper[31420]: I0220 12:21:13.780686 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c6dd6f84-vtwv4"]
Feb 20 12:21:14.045164 master-0 kubenswrapper[31420]: I0220 12:21:14.045112 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rkmxn"]
Feb 20 12:21:14.062886 master-0 kubenswrapper[31420]: I0220 12:21:14.062834 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f6bb-account-create-update-bmnfz"]
Feb 20 12:21:14.071312 master-0 kubenswrapper[31420]: W0220 12:21:14.070852 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ec6ab0a_82f4_4cbd_bba7_b3fe8f9a1710.slice/crio-902b368c962e8b924e10514853143c5b70da779ea9c26eafe48178b28c1fb33c WatchSource:0}: Error finding container 902b368c962e8b924e10514853143c5b70da779ea9c26eafe48178b28c1fb33c: Status 404 returned error can't find the container with id 902b368c962e8b924e10514853143c5b70da779ea9c26eafe48178b28c1fb33c
Feb 20 12:21:14.112449 master-0 kubenswrapper[31420]: W0220 12:21:14.112407 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf8da94d_28af_4392_bdf9_c0d1c6eaeda4.slice/crio-7a40a84ded642be9ec948b94db305ace26b24fc61add1d5319dd61f7781d0159 WatchSource:0}: Error finding container 7a40a84ded642be9ec948b94db305ace26b24fc61add1d5319dd61f7781d0159: Status 404 returned error can't find the container with id 7a40a84ded642be9ec948b94db305ace26b24fc61add1d5319dd61f7781d0159
Feb 20 12:21:14.276938 master-0 kubenswrapper[31420]: I0220 12:21:14.274292 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8jmg4" event={"ID":"603ae118-f4a7-48ea-b99b-8a71f297b617","Type":"ContainerStarted","Data":"9bb96d56826f96af093dc2e3d3c8daa2d93bb851667bba16f7ab39fea2290048"}
Feb 20 12:21:14.276938 master-0 kubenswrapper[31420]: I0220 12:21:14.274353 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8jmg4" event={"ID":"603ae118-f4a7-48ea-b99b-8a71f297b617","Type":"ContainerStarted","Data":"a24cbc6706db6434d1f9543e7c0cf590b4299decf82a29e4c62dd0a453917e05"}
Feb 20 12:21:14.295897 master-0 kubenswrapper[31420]: I0220 12:21:14.280287 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f6bb-account-create-update-bmnfz" event={"ID":"af8da94d-28af-4392-bdf9-c0d1c6eaeda4","Type":"ContainerStarted","Data":"7a40a84ded642be9ec948b94db305ace26b24fc61add1d5319dd61f7781d0159"}
Feb 20 12:21:14.295897 master-0 kubenswrapper[31420]: I0220 12:21:14.281896 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkmxn" event={"ID":"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710","Type":"ContainerStarted","Data":"902b368c962e8b924e10514853143c5b70da779ea9c26eafe48178b28c1fb33c"}
Feb 20 12:21:14.295897 master-0 kubenswrapper[31420]: I0220 12:21:14.283264 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" event={"ID":"eb0da685-95b9-432d-85ba-f6d0389844cb","Type":"ContainerStarted","Data":"fad5998ef0dddc0b5edd2bcf478efe1da75797938453931c975432c0d804938f"}
Feb 20 12:21:14.295897 master-0 kubenswrapper[31420]: I0220 12:21:14.283286 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" event={"ID":"eb0da685-95b9-432d-85ba-f6d0389844cb","Type":"ContainerStarted","Data":"e047f6a31ad3c1a9e3cedff01526d016a4902af155ef0709d01a1220872d75b7"}
Feb 20 12:21:14.295897 master-0 kubenswrapper[31420]: I0220 12:21:14.293668 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-zz7jl"
Feb 20 12:21:14.300565 master-0 kubenswrapper[31420]: I0220 12:21:14.297009 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"50774a30-2089-4dcc-9b00-51a5a600c68b","Type":"ContainerStarted","Data":"7985ad39e3745a1257a93305e6990b31165af0d4a6ef24b871038cf14ab51afe"}
Feb 20 12:21:14.346650 master-0 kubenswrapper[31420]: I0220 12:21:14.344395 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-8jmg4" podStartSLOduration=2.344373514 podStartE2EDuration="2.344373514s" podCreationTimestamp="2026-02-20 12:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:21:14.29755235 +0000 UTC m=+979.016790601" watchObservedRunningTime="2026-02-20 12:21:14.344373514 +0000 UTC m=+979.063611755"
Feb 20 12:21:14.480376 master-0 kubenswrapper[31420]: I0220 12:21:14.479710 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ace2-account-create-update-8fqwl"]
Feb 20 12:21:14.536042 master-0 kubenswrapper[31420]: I0220 12:21:14.523619 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zhh86"]
Feb 20 12:21:14.567480 master-0 kubenswrapper[31420]: I0220 12:21:14.567344 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.729269303 podStartE2EDuration="12.5671229s" podCreationTimestamp="2026-02-20 12:21:02 +0000 UTC" firstStartedPulling="2026-02-20 12:21:03.40029028 +0000 UTC m=+968.119528521" lastFinishedPulling="2026-02-20 12:21:13.238143877 +0000 UTC m=+977.957382118" observedRunningTime="2026-02-20 12:21:14.413286312 +0000 UTC m=+979.132524563" watchObservedRunningTime="2026-02-20 12:21:14.5671229 +0000 UTC m=+979.286361141"
Feb 20 12:21:14.580966 master-0 kubenswrapper[31420]: W0220 12:21:14.580923 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd142bf65_a53f_4fd8_95b0_46c05306e168.slice/crio-bcadcf6eaa487e804e8bab857bd98bd2b2d11963239510e622e0a5e946155d8f WatchSource:0}: Error finding container bcadcf6eaa487e804e8bab857bd98bd2b2d11963239510e622e0a5e946155d8f: Status 404 returned error can't find the container with id bcadcf6eaa487e804e8bab857bd98bd2b2d11963239510e622e0a5e946155d8f
Feb 20 12:21:14.619086 master-0 kubenswrapper[31420]: I0220 12:21:14.619016 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bb02-account-create-update-gjp7p"]
Feb 20 12:21:15.227742 master-0 kubenswrapper[31420]: I0220 12:21:15.227683 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76f9f66bcc-zjvh2"]
Feb 20 12:21:15.229876 master-0 kubenswrapper[31420]: I0220 12:21:15.229845 31420 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.240681 master-0 kubenswrapper[31420]: I0220 12:21:15.240183 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f9f66bcc-zjvh2"] Feb 20 12:21:15.320086 master-0 kubenswrapper[31420]: I0220 12:21:15.317560 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-config\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.320086 master-0 kubenswrapper[31420]: I0220 12:21:15.317615 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgs5s\" (UniqueName: \"kubernetes.io/projected/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-kube-api-access-dgs5s\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.320086 master-0 kubenswrapper[31420]: I0220 12:21:15.317658 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-nb\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.320086 master-0 kubenswrapper[31420]: I0220 12:21:15.317686 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-sb\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.320086 master-0 kubenswrapper[31420]: I0220 
12:21:15.317784 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-svc\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.320086 master-0 kubenswrapper[31420]: I0220 12:21:15.317801 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-swift-storage-0\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.323954 master-0 kubenswrapper[31420]: I0220 12:21:15.323050 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" event={"ID":"eb0da685-95b9-432d-85ba-f6d0389844cb","Type":"ContainerStarted","Data":"80731ff3591f8c2faec7467abe28b8e60a30f0c4d7530dfa36dd9bbf0b2ed8a0"} Feb 20 12:21:15.326446 master-0 kubenswrapper[31420]: I0220 12:21:15.324378 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:15.326446 master-0 kubenswrapper[31420]: I0220 12:21:15.324409 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:15.341154 master-0 kubenswrapper[31420]: I0220 12:21:15.341100 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 12:21:15.344434 master-0 kubenswrapper[31420]: I0220 12:21:15.344290 31420 generic.go:334] "Generic (PLEG): container finished" podID="5dc933f4-f7a5-4ae9-8d43-336be86a5f34" containerID="05c49cede32294792bf46180583c0b8580b76f9952709092062e13e31fc8fdf7" exitCode=0 Feb 20 12:21:15.348580 master-0 kubenswrapper[31420]: I0220 12:21:15.347868 
31420 generic.go:334] "Generic (PLEG): container finished" podID="603ae118-f4a7-48ea-b99b-8a71f297b617" containerID="9bb96d56826f96af093dc2e3d3c8daa2d93bb851667bba16f7ab39fea2290048" exitCode=0 Feb 20 12:21:15.361910 master-0 kubenswrapper[31420]: I0220 12:21:15.352807 31420 generic.go:334] "Generic (PLEG): container finished" podID="d142bf65-a53f-4fd8-95b0-46c05306e168" containerID="9f50b16b977160a1f15ee6b7ba30314830fd1f4637aa06b39d33a96e7ac62bd9" exitCode=0 Feb 20 12:21:15.361910 master-0 kubenswrapper[31420]: I0220 12:21:15.354409 31420 generic.go:334] "Generic (PLEG): container finished" podID="af8da94d-28af-4392-bdf9-c0d1c6eaeda4" containerID="3815ba6310d328f53d67ea5b0ebb678596af8ce615bdef33afbcd4e1ce09a5af" exitCode=0 Feb 20 12:21:15.361910 master-0 kubenswrapper[31420]: I0220 12:21:15.355552 31420 generic.go:334] "Generic (PLEG): container finished" podID="d1383ebf-b51c-4b56-bdec-191e09ab35ac" containerID="63a8f9b946b53f0ce79cb3ae39e1d46c27f94a427c48979fa883ebc4942a0c06" exitCode=0 Feb 20 12:21:15.374853 master-0 kubenswrapper[31420]: I0220 12:21:15.364867 31420 generic.go:334] "Generic (PLEG): container finished" podID="1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710" containerID="8fb243e514b9ab6000eb1df79f67dfa417c2ac187ba67b476973a21322589132" exitCode=0 Feb 20 12:21:15.374853 master-0 kubenswrapper[31420]: I0220 12:21:15.370735 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ace2-account-create-update-8fqwl" event={"ID":"5dc933f4-f7a5-4ae9-8d43-336be86a5f34","Type":"ContainerDied","Data":"05c49cede32294792bf46180583c0b8580b76f9952709092062e13e31fc8fdf7"} Feb 20 12:21:15.374853 master-0 kubenswrapper[31420]: I0220 12:21:15.370790 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ace2-account-create-update-8fqwl" event={"ID":"5dc933f4-f7a5-4ae9-8d43-336be86a5f34","Type":"ContainerStarted","Data":"7a9566d6d543ade03743083877fb4512a6270a61c55d244f56de607e7ec1a975"} Feb 20 12:21:15.374853 master-0 
kubenswrapper[31420]: I0220 12:21:15.370801 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8jmg4" event={"ID":"603ae118-f4a7-48ea-b99b-8a71f297b617","Type":"ContainerDied","Data":"9bb96d56826f96af093dc2e3d3c8daa2d93bb851667bba16f7ab39fea2290048"} Feb 20 12:21:15.374853 master-0 kubenswrapper[31420]: I0220 12:21:15.370815 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bb02-account-create-update-gjp7p" event={"ID":"d142bf65-a53f-4fd8-95b0-46c05306e168","Type":"ContainerDied","Data":"9f50b16b977160a1f15ee6b7ba30314830fd1f4637aa06b39d33a96e7ac62bd9"} Feb 20 12:21:15.374853 master-0 kubenswrapper[31420]: I0220 12:21:15.370826 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bb02-account-create-update-gjp7p" event={"ID":"d142bf65-a53f-4fd8-95b0-46c05306e168","Type":"ContainerStarted","Data":"bcadcf6eaa487e804e8bab857bd98bd2b2d11963239510e622e0a5e946155d8f"} Feb 20 12:21:15.374853 master-0 kubenswrapper[31420]: I0220 12:21:15.370836 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f6bb-account-create-update-bmnfz" event={"ID":"af8da94d-28af-4392-bdf9-c0d1c6eaeda4","Type":"ContainerDied","Data":"3815ba6310d328f53d67ea5b0ebb678596af8ce615bdef33afbcd4e1ce09a5af"} Feb 20 12:21:15.374853 master-0 kubenswrapper[31420]: I0220 12:21:15.370847 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zhh86" event={"ID":"d1383ebf-b51c-4b56-bdec-191e09ab35ac","Type":"ContainerDied","Data":"63a8f9b946b53f0ce79cb3ae39e1d46c27f94a427c48979fa883ebc4942a0c06"} Feb 20 12:21:15.374853 master-0 kubenswrapper[31420]: I0220 12:21:15.370857 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zhh86" event={"ID":"d1383ebf-b51c-4b56-bdec-191e09ab35ac","Type":"ContainerStarted","Data":"c1372661caae77c8a1a19a03e66026ed4ec8c5d15364085cc7bd557840e0d05c"} Feb 20 12:21:15.374853 master-0 
kubenswrapper[31420]: I0220 12:21:15.370866 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkmxn" event={"ID":"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710","Type":"ContainerDied","Data":"8fb243e514b9ab6000eb1df79f67dfa417c2ac187ba67b476973a21322589132"} Feb 20 12:21:15.374853 master-0 kubenswrapper[31420]: I0220 12:21:15.370951 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 20 12:21:15.401776 master-0 kubenswrapper[31420]: I0220 12:21:15.385730 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 20 12:21:15.401776 master-0 kubenswrapper[31420]: I0220 12:21:15.385849 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 20 12:21:15.401776 master-0 kubenswrapper[31420]: I0220 12:21:15.385954 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 20 12:21:15.401776 master-0 kubenswrapper[31420]: I0220 12:21:15.387498 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 12:21:15.402315 master-0 kubenswrapper[31420]: I0220 12:21:15.401903 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" podStartSLOduration=8.401856314 podStartE2EDuration="8.401856314s" podCreationTimestamp="2026-02-20 12:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:21:15.365041053 +0000 UTC m=+980.084279304" watchObservedRunningTime="2026-02-20 12:21:15.401856314 +0000 UTC m=+980.121094565" Feb 20 12:21:15.421554 master-0 kubenswrapper[31420]: I0220 12:21:15.420039 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-config\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.421554 master-0 kubenswrapper[31420]: I0220 12:21:15.420095 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgs5s\" (UniqueName: \"kubernetes.io/projected/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-kube-api-access-dgs5s\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.421554 master-0 kubenswrapper[31420]: I0220 12:21:15.420135 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-nb\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.421554 master-0 kubenswrapper[31420]: I0220 12:21:15.420166 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-sb\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.421554 master-0 kubenswrapper[31420]: I0220 12:21:15.420287 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-svc\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.421554 master-0 kubenswrapper[31420]: I0220 12:21:15.420306 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-swift-storage-0\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.421554 master-0 kubenswrapper[31420]: I0220 12:21:15.421116 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-swift-storage-0\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.422294 master-0 kubenswrapper[31420]: I0220 12:21:15.421938 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-nb\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.422294 master-0 kubenswrapper[31420]: I0220 12:21:15.422207 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-svc\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.425659 master-0 kubenswrapper[31420]: I0220 12:21:15.423635 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-config\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.432621 master-0 kubenswrapper[31420]: I0220 12:21:15.429965 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-sb\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.498703 master-0 kubenswrapper[31420]: I0220 12:21:15.495886 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgs5s\" (UniqueName: \"kubernetes.io/projected/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-kube-api-access-dgs5s\") pod \"dnsmasq-dns-76f9f66bcc-zjvh2\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") " pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.503633 master-0 kubenswrapper[31420]: I0220 12:21:15.500622 31420 scope.go:117] "RemoveContainer" containerID="d75e45c881bc6b74bbd7bbf98c740ad8034658bd845e06076fa2e3543bc15c05" Feb 20 12:21:15.524337 master-0 kubenswrapper[31420]: I0220 12:21:15.524191 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.524598 master-0 kubenswrapper[31420]: I0220 12:21:15.524509 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-scripts\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.524903 master-0 kubenswrapper[31420]: I0220 12:21:15.524869 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: 
\"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.524903 master-0 kubenswrapper[31420]: I0220 12:21:15.524901 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jblfp\" (UniqueName: \"kubernetes.io/projected/d206824b-935b-484a-b791-c83386ba4dca-kube-api-access-jblfp\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.525107 master-0 kubenswrapper[31420]: I0220 12:21:15.525060 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-config\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.525164 master-0 kubenswrapper[31420]: I0220 12:21:15.525122 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.525164 master-0 kubenswrapper[31420]: I0220 12:21:15.525141 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d206824b-935b-484a-b791-c83386ba4dca-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.629371 master-0 kubenswrapper[31420]: I0220 12:21:15.629301 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: 
\"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.629371 master-0 kubenswrapper[31420]: I0220 12:21:15.629368 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jblfp\" (UniqueName: \"kubernetes.io/projected/d206824b-935b-484a-b791-c83386ba4dca-kube-api-access-jblfp\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.629724 master-0 kubenswrapper[31420]: I0220 12:21:15.629578 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-config\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.629724 master-0 kubenswrapper[31420]: I0220 12:21:15.629671 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.629724 master-0 kubenswrapper[31420]: I0220 12:21:15.629696 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d206824b-935b-484a-b791-c83386ba4dca-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.629956 master-0 kubenswrapper[31420]: I0220 12:21:15.629917 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " 
pod="openstack/ironic-inspector-0" Feb 20 12:21:15.630629 master-0 kubenswrapper[31420]: I0220 12:21:15.630025 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-scripts\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.633030 master-0 kubenswrapper[31420]: I0220 12:21:15.632666 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.633030 master-0 kubenswrapper[31420]: I0220 12:21:15.632872 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.638412 master-0 kubenswrapper[31420]: I0220 12:21:15.636803 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-scripts\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.639007 master-0 kubenswrapper[31420]: I0220 12:21:15.638968 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d206824b-935b-484a-b791-c83386ba4dca-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.644507 master-0 kubenswrapper[31420]: I0220 
12:21:15.644448 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.644956 master-0 kubenswrapper[31420]: I0220 12:21:15.644912 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-config\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.650613 master-0 kubenswrapper[31420]: I0220 12:21:15.650584 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jblfp\" (UniqueName: \"kubernetes.io/projected/d206824b-935b-484a-b791-c83386ba4dca-kube-api-access-jblfp\") pod \"ironic-inspector-0\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:15.697732 master-0 kubenswrapper[31420]: I0220 12:21:15.697468 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:15.780443 master-0 kubenswrapper[31420]: I0220 12:21:15.780025 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 20 12:21:15.803621 master-0 kubenswrapper[31420]: I0220 12:21:15.801044 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e60fa-default-internal-api-0"] Feb 20 12:21:15.803621 master-0 kubenswrapper[31420]: I0220 12:21:15.801316 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e60fa-default-internal-api-0" podUID="2092a975-d3a8-4034-9301-d99c84087164" containerName="glance-log" containerID="cri-o://101e552f3ad99c0eddb9a1eb81c89162720cb2fc3e3410a3f72dba0dfa7ff9b7" gracePeriod=30 Feb 20 12:21:15.803621 master-0 kubenswrapper[31420]: I0220 12:21:15.801638 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-e60fa-default-internal-api-0" podUID="2092a975-d3a8-4034-9301-d99c84087164" containerName="glance-httpd" containerID="cri-o://a7973c0efdfc99ea933c9dbc97a7ec14c2ad36e9892e4b5a8fae43fb661baad8" gracePeriod=30 Feb 20 12:21:16.426858 master-0 kubenswrapper[31420]: I0220 12:21:16.426793 31420 generic.go:334] "Generic (PLEG): container finished" podID="2092a975-d3a8-4034-9301-d99c84087164" containerID="101e552f3ad99c0eddb9a1eb81c89162720cb2fc3e3410a3f72dba0dfa7ff9b7" exitCode=143 Feb 20 12:21:16.427108 master-0 kubenswrapper[31420]: I0220 12:21:16.426851 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-internal-api-0" event={"ID":"2092a975-d3a8-4034-9301-d99c84087164","Type":"ContainerDied","Data":"101e552f3ad99c0eddb9a1eb81c89162720cb2fc3e3410a3f72dba0dfa7ff9b7"} Feb 20 12:21:16.439826 master-0 kubenswrapper[31420]: I0220 12:21:16.437263 31420 generic.go:334] "Generic (PLEG): container finished" podID="4f6becc0-4062-4971-9300-fe40c0538d25" containerID="63c35f90eed454fde6afe3d0e77f139ad569d08726242fa8e3eec960ee4204cf" exitCode=0 Feb 20 12:21:16.439826 master-0 kubenswrapper[31420]: I0220 12:21:16.437332 31420 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-7c69cf75d-mfgck" event={"ID":"4f6becc0-4062-4971-9300-fe40c0538d25","Type":"ContainerDied","Data":"63c35f90eed454fde6afe3d0e77f139ad569d08726242fa8e3eec960ee4204cf"}
Feb 20 12:21:16.440781 master-0 kubenswrapper[31420]: I0220 12:21:16.440742 31420 generic.go:334] "Generic (PLEG): container finished" podID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerID="1724846740ee17916c57bb8036a6f78794e53d3205353094d9eaaa4dc6bce4ee" exitCode=0
Feb 20 12:21:16.440877 master-0 kubenswrapper[31420]: I0220 12:21:16.440802 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-external-api-0" event={"ID":"f95db2f7-4dcd-43a4-93fc-650f8cf79f68","Type":"ContainerDied","Data":"1724846740ee17916c57bb8036a6f78794e53d3205353094d9eaaa4dc6bce4ee"}
Feb 20 12:21:16.444229 master-0 kubenswrapper[31420]: I0220 12:21:16.444193 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" event={"ID":"d2416044-6dc6-4ce7-8b30-574bce497d5e","Type":"ContainerStarted","Data":"a9fcd90752ffd12b2d55cf7172f5e90ca78bf631554d629bc28f588044a58353"}
Feb 20 12:21:16.446184 master-0 kubenswrapper[31420]: I0220 12:21:16.446156 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6"
Feb 20 12:21:16.451540 master-0 kubenswrapper[31420]: I0220 12:21:16.451475 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f9f66bcc-zjvh2"]
Feb 20 12:21:16.511194 master-0 kubenswrapper[31420]: W0220 12:21:16.510958 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1ed8d7a_ef1c_4d32_9bdd_a1e29632c688.slice/crio-1b557924cbb21ae4f28bf02626ce149857b1850b096dd2807b2dc0579e75a2d7 WatchSource:0}: Error finding container 1b557924cbb21ae4f28bf02626ce149857b1850b096dd2807b2dc0579e75a2d7: Status 404 returned error can't find the container with id 1b557924cbb21ae4f28bf02626ce149857b1850b096dd2807b2dc0579e75a2d7
Feb 20 12:21:16.787198 master-0 kubenswrapper[31420]: I0220 12:21:16.787098 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:16.898743 master-0 kubenswrapper[31420]: I0220 12:21:16.898675 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-httpd-run\") pod \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") "
Feb 20 12:21:16.899078 master-0 kubenswrapper[31420]: I0220 12:21:16.899062 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-scripts\") pod \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") "
Feb 20 12:21:16.899230 master-0 kubenswrapper[31420]: I0220 12:21:16.899214 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-public-tls-certs\") pod \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") "
Feb 20 12:21:16.899376 master-0 kubenswrapper[31420]: I0220 12:21:16.899360 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-combined-ca-bundle\") pod \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") "
Feb 20 12:21:16.899545 master-0 kubenswrapper[31420]: I0220 12:21:16.899532 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-config-data\") pod \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") "
Feb 20 12:21:16.899695 master-0 kubenswrapper[31420]: I0220 12:21:16.899680 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-logs\") pod \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") "
Feb 20 12:21:16.900138 master-0 kubenswrapper[31420]: I0220 12:21:16.900124 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") "
Feb 20 12:21:16.900241 master-0 kubenswrapper[31420]: I0220 12:21:16.900229 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9drk4\" (UniqueName: \"kubernetes.io/projected/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-kube-api-access-9drk4\") pod \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\" (UID: \"f95db2f7-4dcd-43a4-93fc-650f8cf79f68\") "
Feb 20 12:21:16.910172 master-0 kubenswrapper[31420]: I0220 12:21:16.909985 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f95db2f7-4dcd-43a4-93fc-650f8cf79f68" (UID: "f95db2f7-4dcd-43a4-93fc-650f8cf79f68"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:21:16.910385 master-0 kubenswrapper[31420]: I0220 12:21:16.910291 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-logs" (OuterVolumeSpecName: "logs") pod "f95db2f7-4dcd-43a4-93fc-650f8cf79f68" (UID: "f95db2f7-4dcd-43a4-93fc-650f8cf79f68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:21:16.911942 master-0 kubenswrapper[31420]: I0220 12:21:16.910628 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-kube-api-access-9drk4" (OuterVolumeSpecName: "kube-api-access-9drk4") pod "f95db2f7-4dcd-43a4-93fc-650f8cf79f68" (UID: "f95db2f7-4dcd-43a4-93fc-650f8cf79f68"). InnerVolumeSpecName "kube-api-access-9drk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:21:16.912172 master-0 kubenswrapper[31420]: I0220 12:21:16.912154 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-scripts" (OuterVolumeSpecName: "scripts") pod "f95db2f7-4dcd-43a4-93fc-650f8cf79f68" (UID: "f95db2f7-4dcd-43a4-93fc-650f8cf79f68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:16.941178 master-0 kubenswrapper[31420]: I0220 12:21:16.941121 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a" (OuterVolumeSpecName: "glance") pod "f95db2f7-4dcd-43a4-93fc-650f8cf79f68" (UID: "f95db2f7-4dcd-43a4-93fc-650f8cf79f68"). InnerVolumeSpecName "pvc-935c0235-ccde-432b-b98e-92716df30f4a". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 20 12:21:16.951879 master-0 kubenswrapper[31420]: I0220 12:21:16.951828 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f95db2f7-4dcd-43a4-93fc-650f8cf79f68" (UID: "f95db2f7-4dcd-43a4-93fc-650f8cf79f68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:17.003603 master-0 kubenswrapper[31420]: I0220 12:21:17.000908 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f95db2f7-4dcd-43a4-93fc-650f8cf79f68" (UID: "f95db2f7-4dcd-43a4-93fc-650f8cf79f68"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:17.003603 master-0 kubenswrapper[31420]: I0220 12:21:17.002836 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:17.003603 master-0 kubenswrapper[31420]: I0220 12:21:17.002873 31420 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:17.003603 master-0 kubenswrapper[31420]: I0220 12:21:17.002889 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:17.003603 master-0 kubenswrapper[31420]: I0220 12:21:17.002903 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-logs\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:17.003603 master-0 kubenswrapper[31420]: I0220 12:21:17.002930 31420 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") on node \"master-0\" "
Feb 20 12:21:17.003603 master-0 kubenswrapper[31420]: I0220 12:21:17.002946 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9drk4\" (UniqueName: \"kubernetes.io/projected/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-kube-api-access-9drk4\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:17.003603 master-0 kubenswrapper[31420]: I0220 12:21:17.002960 31420 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:17.030344 master-0 kubenswrapper[31420]: I0220 12:21:17.030256 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-config-data" (OuterVolumeSpecName: "config-data") pod "f95db2f7-4dcd-43a4-93fc-650f8cf79f68" (UID: "f95db2f7-4dcd-43a4-93fc-650f8cf79f68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:17.048875 master-0 kubenswrapper[31420]: I0220 12:21:17.042074 31420 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 20 12:21:17.048875 master-0 kubenswrapper[31420]: I0220 12:21:17.042233 31420 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-935c0235-ccde-432b-b98e-92716df30f4a" (UniqueName: "kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a") on node "master-0"
Feb 20 12:21:17.106425 master-0 kubenswrapper[31420]: I0220 12:21:17.106350 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95db2f7-4dcd-43a4-93fc-650f8cf79f68-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:17.106425 master-0 kubenswrapper[31420]: I0220 12:21:17.106399 31420 reconciler_common.go:293] "Volume detached for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:17.481008 master-0 kubenswrapper[31420]: I0220 12:21:17.480865 31420 generic.go:334] "Generic (PLEG): container finished" podID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" containerID="17edb73705584b4279cf7fbffa34a282fdc33f1e12b4fb4f27afb80cef231a21" exitCode=0
Feb 20 12:21:17.481008 master-0 kubenswrapper[31420]: I0220 12:21:17.480931 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" event={"ID":"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688","Type":"ContainerDied","Data":"17edb73705584b4279cf7fbffa34a282fdc33f1e12b4fb4f27afb80cef231a21"}
Feb 20 12:21:17.481008 master-0 kubenswrapper[31420]: I0220 12:21:17.480956 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" event={"ID":"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688","Type":"ContainerStarted","Data":"1b557924cbb21ae4f28bf02626ce149857b1850b096dd2807b2dc0579e75a2d7"}
Feb 20 12:21:17.487520 master-0 kubenswrapper[31420]: I0220 12:21:17.487374 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:17.487520 master-0 kubenswrapper[31420]: I0220 12:21:17.487455 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-external-api-0" event={"ID":"f95db2f7-4dcd-43a4-93fc-650f8cf79f68","Type":"ContainerDied","Data":"2835cfd96341f9e53665b810b042d43598ec62257baa536e632637d0c70bf674"}
Feb 20 12:21:17.488084 master-0 kubenswrapper[31420]: I0220 12:21:17.487980 31420 scope.go:117] "RemoveContainer" containerID="1724846740ee17916c57bb8036a6f78794e53d3205353094d9eaaa4dc6bce4ee"
Feb 20 12:21:17.954104 master-0 kubenswrapper[31420]: I0220 12:21:17.954012 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"]
Feb 20 12:21:17.987861 master-0 kubenswrapper[31420]: I0220 12:21:17.987799 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"]
Feb 20 12:21:17.999987 master-0 kubenswrapper[31420]: I0220 12:21:17.999937 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e60fa-default-external-api-0"]
Feb 20 12:21:18.000461 master-0 kubenswrapper[31420]: E0220 12:21:18.000440 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerName="glance-log"
Feb 20 12:21:18.000461 master-0 kubenswrapper[31420]: I0220 12:21:18.000459 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerName="glance-log"
Feb 20 12:21:18.000575 master-0 kubenswrapper[31420]: E0220 12:21:18.000498 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerName="glance-httpd"
Feb 20 12:21:18.000575 master-0 kubenswrapper[31420]: I0220 12:21:18.000505 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerName="glance-httpd"
Feb 20 12:21:18.000795 master-0 kubenswrapper[31420]: I0220 12:21:18.000777 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerName="glance-log"
Feb 20 12:21:18.000843 master-0 kubenswrapper[31420]: I0220 12:21:18.000803 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" containerName="glance-httpd"
Feb 20 12:21:18.002137 master-0 kubenswrapper[31420]: I0220 12:21:18.002094 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.010614 master-0 kubenswrapper[31420]: I0220 12:21:18.010553 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e60fa-default-external-config-data"
Feb 20 12:21:18.011039 master-0 kubenswrapper[31420]: I0220 12:21:18.011018 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 20 12:21:18.013196 master-0 kubenswrapper[31420]: I0220 12:21:18.013123 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"]
Feb 20 12:21:18.071294 master-0 kubenswrapper[31420]: I0220 12:21:18.070666 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 20 12:21:18.139962 master-0 kubenswrapper[31420]: I0220 12:21:18.139901 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-scripts\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.140224 master-0 kubenswrapper[31420]: I0220 12:21:18.140031 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-combined-ca-bundle\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.140224 master-0 kubenswrapper[31420]: I0220 12:21:18.140205 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-config-data\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.140338 master-0 kubenswrapper[31420]: I0220 12:21:18.140235 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-logs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.140338 master-0 kubenswrapper[31420]: I0220 12:21:18.140270 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-public-tls-certs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.140338 master-0 kubenswrapper[31420]: I0220 12:21:18.140296 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-httpd-run\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.140478 master-0 kubenswrapper[31420]: I0220 12:21:18.140381 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7g2h\" (UniqueName: \"kubernetes.io/projected/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-kube-api-access-j7g2h\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.140478 master-0 kubenswrapper[31420]: I0220 12:21:18.140409 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.242984 master-0 kubenswrapper[31420]: I0220 12:21:18.242812 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-config-data\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.242984 master-0 kubenswrapper[31420]: I0220 12:21:18.242896 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-logs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.242984 master-0 kubenswrapper[31420]: I0220 12:21:18.242947 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-public-tls-certs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.244251 master-0 kubenswrapper[31420]: I0220 12:21:18.243320 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-httpd-run\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.244251 master-0 kubenswrapper[31420]: I0220 12:21:18.243487 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7g2h\" (UniqueName: \"kubernetes.io/projected/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-kube-api-access-j7g2h\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.244251 master-0 kubenswrapper[31420]: I0220 12:21:18.243519 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-logs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.244251 master-0 kubenswrapper[31420]: I0220 12:21:18.243551 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.244251 master-0 kubenswrapper[31420]: I0220 12:21:18.243642 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-scripts\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.244251 master-0 kubenswrapper[31420]: I0220 12:21:18.243733 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-combined-ca-bundle\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.244251 master-0 kubenswrapper[31420]: I0220 12:21:18.243899 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-httpd-run\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.246709 master-0 kubenswrapper[31420]: I0220 12:21:18.246396 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 12:21:18.246709 master-0 kubenswrapper[31420]: I0220 12:21:18.246429 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/368f719461f0c265cdc1ceb7166e6bad74c18134a381ef3f2ecc6c3c88bbea1f/globalmount\"" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.247972 master-0 kubenswrapper[31420]: I0220 12:21:18.247576 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-scripts\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.250521 master-0 kubenswrapper[31420]: I0220 12:21:18.250475 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-combined-ca-bundle\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.251461 master-0 kubenswrapper[31420]: I0220 12:21:18.251414 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-config-data\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.258794 master-0 kubenswrapper[31420]: I0220 12:21:18.258748 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-public-tls-certs\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:18.261956 master-0 kubenswrapper[31420]: I0220 12:21:18.261923 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7g2h\" (UniqueName: \"kubernetes.io/projected/d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d-kube-api-access-j7g2h\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:19.109874 master-0 kubenswrapper[31420]: I0220 12:21:19.109771 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-935c0235-ccde-432b-b98e-92716df30f4a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2b23ca0f-89e0-4c01-9b2c-40ebec193f9a\") pod \"glance-e60fa-default-external-api-0\" (UID: \"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d\") " pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:19.232564 master-0 kubenswrapper[31420]: I0220 12:21:19.232147 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-external-api-0"
Feb 20 12:21:19.512967 master-0 kubenswrapper[31420]: I0220 12:21:19.512901 31420 generic.go:334] "Generic (PLEG): container finished" podID="2092a975-d3a8-4034-9301-d99c84087164" containerID="a7973c0efdfc99ea933c9dbc97a7ec14c2ad36e9892e4b5a8fae43fb661baad8" exitCode=0
Feb 20 12:21:19.514787 master-0 kubenswrapper[31420]: I0220 12:21:19.514759 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f95db2f7-4dcd-43a4-93fc-650f8cf79f68" path="/var/lib/kubelet/pods/f95db2f7-4dcd-43a4-93fc-650f8cf79f68/volumes"
Feb 20 12:21:19.518184 master-0 kubenswrapper[31420]: I0220 12:21:19.518129 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-internal-api-0" event={"ID":"2092a975-d3a8-4034-9301-d99c84087164","Type":"ContainerDied","Data":"a7973c0efdfc99ea933c9dbc97a7ec14c2ad36e9892e4b5a8fae43fb661baad8"}
Feb 20 12:21:19.569553 master-0 kubenswrapper[31420]: I0220 12:21:19.560731 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 20 12:21:21.559205 master-0 kubenswrapper[31420]: I0220 12:21:21.559121 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8jmg4" event={"ID":"603ae118-f4a7-48ea-b99b-8a71f297b617","Type":"ContainerDied","Data":"a24cbc6706db6434d1f9543e7c0cf590b4299decf82a29e4c62dd0a453917e05"}
Feb 20 12:21:21.559205 master-0 kubenswrapper[31420]: I0220 12:21:21.559175 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24cbc6706db6434d1f9543e7c0cf590b4299decf82a29e4c62dd0a453917e05"
Feb 20 12:21:21.562091 master-0 kubenswrapper[31420]: I0220 12:21:21.561930 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d206824b-935b-484a-b791-c83386ba4dca","Type":"ContainerStarted","Data":"0786f0326ab0f0f5e027ff7275aaea9f60ec1040053fd2426528dcd19e10ddbc"}
Feb 20 12:21:21.566966 master-0 kubenswrapper[31420]: I0220 12:21:21.566855 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bb02-account-create-update-gjp7p" event={"ID":"d142bf65-a53f-4fd8-95b0-46c05306e168","Type":"ContainerDied","Data":"bcadcf6eaa487e804e8bab857bd98bd2b2d11963239510e622e0a5e946155d8f"}
Feb 20 12:21:21.566966 master-0 kubenswrapper[31420]: I0220 12:21:21.566917 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcadcf6eaa487e804e8bab857bd98bd2b2d11963239510e622e0a5e946155d8f"
Feb 20 12:21:21.577982 master-0 kubenswrapper[31420]: I0220 12:21:21.576855 31420 scope.go:117] "RemoveContainer" containerID="356157ba73415aa002f3e37c1ee802ff68584de48c801ce706ce3de5e01cc676"
Feb 20 12:21:21.631516 master-0 kubenswrapper[31420]: I0220 12:21:21.631114 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c69cf75d-mfgck" event={"ID":"4f6becc0-4062-4971-9300-fe40c0538d25","Type":"ContainerDied","Data":"437ec292af3d31dca563cc2b357a655612ea9a1a6c3727e13becc7ee8d483d78"}
Feb 20 12:21:21.631516 master-0 kubenswrapper[31420]: I0220 12:21:21.631194 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="437ec292af3d31dca563cc2b357a655612ea9a1a6c3727e13becc7ee8d483d78"
Feb 20 12:21:21.634657 master-0 kubenswrapper[31420]: I0220 12:21:21.634600 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f6bb-account-create-update-bmnfz" event={"ID":"af8da94d-28af-4392-bdf9-c0d1c6eaeda4","Type":"ContainerDied","Data":"7a40a84ded642be9ec948b94db305ace26b24fc61add1d5319dd61f7781d0159"}
Feb 20 12:21:21.634814 master-0 kubenswrapper[31420]: I0220 12:21:21.634662 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a40a84ded642be9ec948b94db305ace26b24fc61add1d5319dd61f7781d0159"
Feb 20 12:21:21.637015 master-0 kubenswrapper[31420]: I0220 12:21:21.636916 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zhh86" event={"ID":"d1383ebf-b51c-4b56-bdec-191e09ab35ac","Type":"ContainerDied","Data":"c1372661caae77c8a1a19a03e66026ed4ec8c5d15364085cc7bd557840e0d05c"}
Feb 20 12:21:21.637121 master-0 kubenswrapper[31420]: I0220 12:21:21.637024 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1372661caae77c8a1a19a03e66026ed4ec8c5d15364085cc7bd557840e0d05c"
Feb 20 12:21:21.638488 master-0 kubenswrapper[31420]: I0220 12:21:21.638456 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkmxn" event={"ID":"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710","Type":"ContainerDied","Data":"902b368c962e8b924e10514853143c5b70da779ea9c26eafe48178b28c1fb33c"}
Feb 20 12:21:21.638604 master-0 kubenswrapper[31420]: I0220 12:21:21.638487 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="902b368c962e8b924e10514853143c5b70da779ea9c26eafe48178b28c1fb33c"
Feb 20 12:21:21.650557 master-0 kubenswrapper[31420]: I0220 12:21:21.650477 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ace2-account-create-update-8fqwl" event={"ID":"5dc933f4-f7a5-4ae9-8d43-336be86a5f34","Type":"ContainerDied","Data":"7a9566d6d543ade03743083877fb4512a6270a61c55d244f56de607e7ec1a975"}
Feb 20 12:21:21.650557 master-0 kubenswrapper[31420]: I0220 12:21:21.650557 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a9566d6d543ade03743083877fb4512a6270a61c55d244f56de607e7ec1a975"
Feb 20 12:21:21.776061 master-0 kubenswrapper[31420]: I0220 12:21:21.775997 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkmxn"
Feb 20 12:21:21.811925 master-0 kubenswrapper[31420]: I0220 12:21:21.803589 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f6bb-account-create-update-bmnfz"
Feb 20 12:21:21.862759 master-0 kubenswrapper[31420]: I0220 12:21:21.862686 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z476h\" (UniqueName: \"kubernetes.io/projected/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-kube-api-access-z476h\") pod \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\" (UID: \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\") "
Feb 20 12:21:21.862759 master-0 kubenswrapper[31420]: I0220 12:21:21.862738 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq2sr\" (UniqueName: \"kubernetes.io/projected/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-kube-api-access-bq2sr\") pod \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\" (UID: \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\") "
Feb 20 12:21:21.863046 master-0 kubenswrapper[31420]: I0220 12:21:21.862813 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-operator-scripts\") pod \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\" (UID: \"1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710\") "
Feb 20 12:21:21.863046 master-0 kubenswrapper[31420]: I0220 12:21:21.862940 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-operator-scripts\") pod \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\" (UID: \"af8da94d-28af-4392-bdf9-c0d1c6eaeda4\") "
Feb 20 12:21:21.864029 master-0 kubenswrapper[31420]: I0220 12:21:21.863920 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710" (UID: "1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:21:21.864029 master-0 kubenswrapper[31420]: I0220 12:21:21.863944 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af8da94d-28af-4392-bdf9-c0d1c6eaeda4" (UID: "af8da94d-28af-4392-bdf9-c0d1c6eaeda4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:21:21.889562 master-0 kubenswrapper[31420]: I0220 12:21:21.867843 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-kube-api-access-bq2sr" (OuterVolumeSpecName: "kube-api-access-bq2sr") pod "af8da94d-28af-4392-bdf9-c0d1c6eaeda4" (UID: "af8da94d-28af-4392-bdf9-c0d1c6eaeda4"). InnerVolumeSpecName "kube-api-access-bq2sr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:21:21.889562 master-0 kubenswrapper[31420]: I0220 12:21:21.875435 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-kube-api-access-z476h" (OuterVolumeSpecName: "kube-api-access-z476h") pod "1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710" (UID: "1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710"). InnerVolumeSpecName "kube-api-access-z476h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:21:21.889562 master-0 kubenswrapper[31420]: I0220 12:21:21.889269 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c69cf75d-mfgck"
Feb 20 12:21:21.943651 master-0 kubenswrapper[31420]: I0220 12:21:21.920359 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb02-account-create-update-gjp7p"
Feb 20 12:21:21.949763 master-0 kubenswrapper[31420]: I0220 12:21:21.949058 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ace2-account-create-update-8fqwl"
Feb 20 12:21:21.963384 master-0 kubenswrapper[31420]: I0220 12:21:21.963345 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zhh86"
Feb 20 12:21:21.965363 master-0 kubenswrapper[31420]: I0220 12:21:21.965297 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-operator-scripts\") pod \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\" (UID: \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\") "
Feb 20 12:21:21.965363 master-0 kubenswrapper[31420]: I0220 12:21:21.965350 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-combined-ca-bundle\") pod \"4f6becc0-4062-4971-9300-fe40c0538d25\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") "
Feb 20 12:21:21.966489 master-0 kubenswrapper[31420]: I0220 12:21:21.965436 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcw4w\" (UniqueName: \"kubernetes.io/projected/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-kube-api-access-vcw4w\") pod \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\" (UID: \"5dc933f4-f7a5-4ae9-8d43-336be86a5f34\") "
Feb 20 12:21:21.966489 master-0 kubenswrapper[31420]: I0220 12:21:21.965482 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5bfk\" (UniqueName: \"kubernetes.io/projected/4f6becc0-4062-4971-9300-fe40c0538d25-kube-api-access-l5bfk\") pod \"4f6becc0-4062-4971-9300-fe40c0538d25\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") "
Feb 20 12:21:21.966489 master-0 kubenswrapper[31420]: I0220 12:21:21.965702 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-ovndb-tls-certs\") pod \"4f6becc0-4062-4971-9300-fe40c0538d25\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") "
Feb 20 12:21:21.966489 master-0 kubenswrapper[31420]: I0220 12:21:21.965753 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-config\") pod \"4f6becc0-4062-4971-9300-fe40c0538d25\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") "
Feb 20 12:21:21.966489 master-0 kubenswrapper[31420]: I0220 12:21:21.965895 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smf77\" (UniqueName: \"kubernetes.io/projected/d142bf65-a53f-4fd8-95b0-46c05306e168-kube-api-access-smf77\") pod \"d142bf65-a53f-4fd8-95b0-46c05306e168\" (UID: \"d142bf65-a53f-4fd8-95b0-46c05306e168\") "
Feb 20 12:21:21.966489 master-0 kubenswrapper[31420]: I0220 12:21:21.965929 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5dc933f4-f7a5-4ae9-8d43-336be86a5f34" (UID: "5dc933f4-f7a5-4ae9-8d43-336be86a5f34"). InnerVolumeSpecName "operator-scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:21:21.966489 master-0 kubenswrapper[31420]: I0220 12:21:21.965945 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d142bf65-a53f-4fd8-95b0-46c05306e168-operator-scripts\") pod \"d142bf65-a53f-4fd8-95b0-46c05306e168\" (UID: \"d142bf65-a53f-4fd8-95b0-46c05306e168\") " Feb 20 12:21:21.966489 master-0 kubenswrapper[31420]: I0220 12:21:21.966356 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d142bf65-a53f-4fd8-95b0-46c05306e168-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d142bf65-a53f-4fd8-95b0-46c05306e168" (UID: "d142bf65-a53f-4fd8-95b0-46c05306e168"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:21:21.966865 master-0 kubenswrapper[31420]: I0220 12:21:21.966679 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-httpd-config\") pod \"4f6becc0-4062-4971-9300-fe40c0538d25\" (UID: \"4f6becc0-4062-4971-9300-fe40c0538d25\") " Feb 20 12:21:21.968186 master-0 kubenswrapper[31420]: I0220 12:21:21.968152 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z476h\" (UniqueName: \"kubernetes.io/projected/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-kube-api-access-z476h\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:21.968186 master-0 kubenswrapper[31420]: I0220 12:21:21.968184 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq2sr\" (UniqueName: \"kubernetes.io/projected/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-kube-api-access-bq2sr\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:21.968303 master-0 kubenswrapper[31420]: I0220 12:21:21.968197 31420 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:21.968303 master-0 kubenswrapper[31420]: I0220 12:21:21.968210 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:21.968303 master-0 kubenswrapper[31420]: I0220 12:21:21.968224 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af8da94d-28af-4392-bdf9-c0d1c6eaeda4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:21.968303 master-0 kubenswrapper[31420]: I0220 12:21:21.968235 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d142bf65-a53f-4fd8-95b0-46c05306e168-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:21.971562 master-0 kubenswrapper[31420]: I0220 12:21:21.971504 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4f6becc0-4062-4971-9300-fe40c0538d25" (UID: "4f6becc0-4062-4971-9300-fe40c0538d25"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:21.974058 master-0 kubenswrapper[31420]: I0220 12:21:21.972870 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d142bf65-a53f-4fd8-95b0-46c05306e168-kube-api-access-smf77" (OuterVolumeSpecName: "kube-api-access-smf77") pod "d142bf65-a53f-4fd8-95b0-46c05306e168" (UID: "d142bf65-a53f-4fd8-95b0-46c05306e168"). InnerVolumeSpecName "kube-api-access-smf77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:21.974058 master-0 kubenswrapper[31420]: I0220 12:21:21.973024 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f6becc0-4062-4971-9300-fe40c0538d25-kube-api-access-l5bfk" (OuterVolumeSpecName: "kube-api-access-l5bfk") pod "4f6becc0-4062-4971-9300-fe40c0538d25" (UID: "4f6becc0-4062-4971-9300-fe40c0538d25"). InnerVolumeSpecName "kube-api-access-l5bfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:21.979940 master-0 kubenswrapper[31420]: I0220 12:21:21.979891 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8jmg4" Feb 20 12:21:21.991999 master-0 kubenswrapper[31420]: I0220 12:21:21.991943 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-kube-api-access-vcw4w" (OuterVolumeSpecName: "kube-api-access-vcw4w") pod "5dc933f4-f7a5-4ae9-8d43-336be86a5f34" (UID: "5dc933f4-f7a5-4ae9-8d43-336be86a5f34"). InnerVolumeSpecName "kube-api-access-vcw4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:22.069997 master-0 kubenswrapper[31420]: I0220 12:21:22.069918 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-688d4\" (UniqueName: \"kubernetes.io/projected/603ae118-f4a7-48ea-b99b-8a71f297b617-kube-api-access-688d4\") pod \"603ae118-f4a7-48ea-b99b-8a71f297b617\" (UID: \"603ae118-f4a7-48ea-b99b-8a71f297b617\") " Feb 20 12:21:22.069997 master-0 kubenswrapper[31420]: I0220 12:21:22.069994 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1383ebf-b51c-4b56-bdec-191e09ab35ac-operator-scripts\") pod \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\" (UID: \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\") " Feb 20 12:21:22.070402 master-0 kubenswrapper[31420]: I0220 12:21:22.070142 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvgp9\" (UniqueName: \"kubernetes.io/projected/d1383ebf-b51c-4b56-bdec-191e09ab35ac-kube-api-access-lvgp9\") pod \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\" (UID: \"d1383ebf-b51c-4b56-bdec-191e09ab35ac\") " Feb 20 12:21:22.070402 master-0 kubenswrapper[31420]: I0220 12:21:22.070168 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ae118-f4a7-48ea-b99b-8a71f297b617-operator-scripts\") pod \"603ae118-f4a7-48ea-b99b-8a71f297b617\" (UID: \"603ae118-f4a7-48ea-b99b-8a71f297b617\") " Feb 20 12:21:22.070798 master-0 kubenswrapper[31420]: I0220 12:21:22.070766 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcw4w\" (UniqueName: \"kubernetes.io/projected/5dc933f4-f7a5-4ae9-8d43-336be86a5f34-kube-api-access-vcw4w\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.070798 master-0 kubenswrapper[31420]: I0220 12:21:22.070785 31420 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-l5bfk\" (UniqueName: \"kubernetes.io/projected/4f6becc0-4062-4971-9300-fe40c0538d25-kube-api-access-l5bfk\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.070919 master-0 kubenswrapper[31420]: I0220 12:21:22.070816 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smf77\" (UniqueName: \"kubernetes.io/projected/d142bf65-a53f-4fd8-95b0-46c05306e168-kube-api-access-smf77\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.070919 master-0 kubenswrapper[31420]: I0220 12:21:22.070827 31420 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-httpd-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.071271 master-0 kubenswrapper[31420]: I0220 12:21:22.071234 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603ae118-f4a7-48ea-b99b-8a71f297b617-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "603ae118-f4a7-48ea-b99b-8a71f297b617" (UID: "603ae118-f4a7-48ea-b99b-8a71f297b617"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:21:22.072200 master-0 kubenswrapper[31420]: I0220 12:21:22.072156 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1383ebf-b51c-4b56-bdec-191e09ab35ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1383ebf-b51c-4b56-bdec-191e09ab35ac" (UID: "d1383ebf-b51c-4b56-bdec-191e09ab35ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:21:22.080087 master-0 kubenswrapper[31420]: I0220 12:21:22.079889 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1383ebf-b51c-4b56-bdec-191e09ab35ac-kube-api-access-lvgp9" (OuterVolumeSpecName: "kube-api-access-lvgp9") pod "d1383ebf-b51c-4b56-bdec-191e09ab35ac" (UID: "d1383ebf-b51c-4b56-bdec-191e09ab35ac"). InnerVolumeSpecName "kube-api-access-lvgp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:22.082150 master-0 kubenswrapper[31420]: I0220 12:21:22.082075 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603ae118-f4a7-48ea-b99b-8a71f297b617-kube-api-access-688d4" (OuterVolumeSpecName: "kube-api-access-688d4") pod "603ae118-f4a7-48ea-b99b-8a71f297b617" (UID: "603ae118-f4a7-48ea-b99b-8a71f297b617"). InnerVolumeSpecName "kube-api-access-688d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:22.098179 master-0 kubenswrapper[31420]: I0220 12:21:22.098085 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:22.099692 master-0 kubenswrapper[31420]: I0220 12:21:22.099596 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-config" (OuterVolumeSpecName: "config") pod "4f6becc0-4062-4971-9300-fe40c0538d25" (UID: "4f6becc0-4062-4971-9300-fe40c0538d25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:22.129419 master-0 kubenswrapper[31420]: I0220 12:21:22.128802 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f6becc0-4062-4971-9300-fe40c0538d25" (UID: "4f6becc0-4062-4971-9300-fe40c0538d25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.171502 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-config-data\") pod \"2092a975-d3a8-4034-9301-d99c84087164\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.171731 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-scripts\") pod \"2092a975-d3a8-4034-9301-d99c84087164\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.171763 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-internal-tls-certs\") pod \"2092a975-d3a8-4034-9301-d99c84087164\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.171800 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxpjf\" (UniqueName: \"kubernetes.io/projected/2092a975-d3a8-4034-9301-d99c84087164-kube-api-access-pxpjf\") pod \"2092a975-d3a8-4034-9301-d99c84087164\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " Feb 20 
12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.171894 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-httpd-run\") pod \"2092a975-d3a8-4034-9301-d99c84087164\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.171994 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") pod \"2092a975-d3a8-4034-9301-d99c84087164\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.172067 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-combined-ca-bundle\") pod \"2092a975-d3a8-4034-9301-d99c84087164\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.172096 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-logs\") pod \"2092a975-d3a8-4034-9301-d99c84087164\" (UID: \"2092a975-d3a8-4034-9301-d99c84087164\") " Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.172672 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.172687 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-688d4\" (UniqueName: \"kubernetes.io/projected/603ae118-f4a7-48ea-b99b-8a71f297b617-kube-api-access-688d4\") on node \"master-0\" DevicePath \"\"" Feb 20 
12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.172701 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1383ebf-b51c-4b56-bdec-191e09ab35ac-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.172710 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.172719 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvgp9\" (UniqueName: \"kubernetes.io/projected/d1383ebf-b51c-4b56-bdec-191e09ab35ac-kube-api-access-lvgp9\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.172860 master-0 kubenswrapper[31420]: I0220 12:21:22.172728 31420 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ae118-f4a7-48ea-b99b-8a71f297b617-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.174139 master-0 kubenswrapper[31420]: I0220 12:21:22.174078 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-logs" (OuterVolumeSpecName: "logs") pod "2092a975-d3a8-4034-9301-d99c84087164" (UID: "2092a975-d3a8-4034-9301-d99c84087164"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:21:22.184778 master-0 kubenswrapper[31420]: I0220 12:21:22.184733 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2092a975-d3a8-4034-9301-d99c84087164" (UID: "2092a975-d3a8-4034-9301-d99c84087164"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:21:22.190418 master-0 kubenswrapper[31420]: I0220 12:21:22.189053 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-scripts" (OuterVolumeSpecName: "scripts") pod "2092a975-d3a8-4034-9301-d99c84087164" (UID: "2092a975-d3a8-4034-9301-d99c84087164"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:22.198553 master-0 kubenswrapper[31420]: I0220 12:21:22.193674 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2092a975-d3a8-4034-9301-d99c84087164-kube-api-access-pxpjf" (OuterVolumeSpecName: "kube-api-access-pxpjf") pod "2092a975-d3a8-4034-9301-d99c84087164" (UID: "2092a975-d3a8-4034-9301-d99c84087164"). InnerVolumeSpecName "kube-api-access-pxpjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:22.212013 master-0 kubenswrapper[31420]: I0220 12:21:22.211964 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b" (OuterVolumeSpecName: "glance") pod "2092a975-d3a8-4034-9301-d99c84087164" (UID: "2092a975-d3a8-4034-9301-d99c84087164"). InnerVolumeSpecName "pvc-02654c43-ecb4-4424-b56e-75484a4f3f54". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 20 12:21:22.216792 master-0 kubenswrapper[31420]: I0220 12:21:22.216736 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-5db78c68bd-t4cm6" Feb 20 12:21:22.236422 master-0 kubenswrapper[31420]: I0220 12:21:22.232753 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2092a975-d3a8-4034-9301-d99c84087164" (UID: "2092a975-d3a8-4034-9301-d99c84087164"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:22.280491 master-0 kubenswrapper[31420]: I0220 12:21:22.279074 31420 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.280491 master-0 kubenswrapper[31420]: I0220 12:21:22.279143 31420 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") on node \"master-0\" " Feb 20 12:21:22.280491 master-0 kubenswrapper[31420]: I0220 12:21:22.279157 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.280491 master-0 kubenswrapper[31420]: I0220 12:21:22.279168 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2092a975-d3a8-4034-9301-d99c84087164-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.280491 master-0 kubenswrapper[31420]: I0220 12:21:22.279177 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.280491 master-0 kubenswrapper[31420]: I0220 12:21:22.279186 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxpjf\" (UniqueName: \"kubernetes.io/projected/2092a975-d3a8-4034-9301-d99c84087164-kube-api-access-pxpjf\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.321562 master-0 kubenswrapper[31420]: I0220 12:21:22.314690 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4f6becc0-4062-4971-9300-fe40c0538d25" (UID: "4f6becc0-4062-4971-9300-fe40c0538d25"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:22.321562 master-0 kubenswrapper[31420]: I0220 12:21:22.317052 31420 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 20 12:21:22.321562 master-0 kubenswrapper[31420]: I0220 12:21:22.317173 31420 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-02654c43-ecb4-4424-b56e-75484a4f3f54" (UniqueName: "kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b") on node "master-0" Feb 20 12:21:22.379320 master-0 kubenswrapper[31420]: I0220 12:21:22.379256 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-config-data" (OuterVolumeSpecName: "config-data") pod "2092a975-d3a8-4034-9301-d99c84087164" (UID: "2092a975-d3a8-4034-9301-d99c84087164"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:22.387575 master-0 kubenswrapper[31420]: I0220 12:21:22.384696 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2092a975-d3a8-4034-9301-d99c84087164" (UID: "2092a975-d3a8-4034-9301-d99c84087164"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:22.387575 master-0 kubenswrapper[31420]: I0220 12:21:22.386094 31420 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.387575 master-0 kubenswrapper[31420]: I0220 12:21:22.386120 31420 reconciler_common.go:293] "Volume detached for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.387575 master-0 kubenswrapper[31420]: I0220 12:21:22.386132 31420 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6becc0-4062-4971-9300-fe40c0538d25-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.387575 master-0 kubenswrapper[31420]: I0220 12:21:22.386141 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2092a975-d3a8-4034-9301-d99c84087164-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:22.396549 master-0 kubenswrapper[31420]: I0220 12:21:22.394319 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e60fa-default-external-api-0"] Feb 20 12:21:22.591986 master-0 kubenswrapper[31420]: I0220 12:21:22.588895 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:21:22.679479 master-0 kubenswrapper[31420]: I0220 12:21:22.679408 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-internal-api-0" event={"ID":"2092a975-d3a8-4034-9301-d99c84087164","Type":"ContainerDied","Data":"ffc21f80eb6e08b57d951e581b7f08fb1ca94c23ef7c689bdb9c9b9fb4448a96"} Feb 20 12:21:22.679479 master-0 kubenswrapper[31420]: I0220 12:21:22.679477 
31420 scope.go:117] "RemoveContainer" containerID="a7973c0efdfc99ea933c9dbc97a7ec14c2ad36e9892e4b5a8fae43fb661baad8" Feb 20 12:21:22.679731 master-0 kubenswrapper[31420]: I0220 12:21:22.679639 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:22.689571 master-0 kubenswrapper[31420]: I0220 12:21:22.689478 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" event={"ID":"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688","Type":"ContainerStarted","Data":"7369da29a0812fd00f07b854d5d50cc5dbfbeea5c83695a2f73d14d2dd36617e"} Feb 20 12:21:22.689844 master-0 kubenswrapper[31420]: I0220 12:21:22.689804 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:22.699194 master-0 kubenswrapper[31420]: I0220 12:21:22.696551 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-external-api-0" event={"ID":"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d","Type":"ContainerStarted","Data":"80c40303d4718f7a3199d965d058a761a5146a86de1e207927ee7c82f9296f37"} Feb 20 12:21:22.702477 master-0 kubenswrapper[31420]: I0220 12:21:22.702432 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:22.713126 master-0 kubenswrapper[31420]: I0220 12:21:22.708960 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c6dd6f84-vtwv4" Feb 20 12:21:22.713126 master-0 kubenswrapper[31420]: I0220 12:21:22.709603 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerStarted","Data":"38ae6bc125342b23cb307b6eb1ff82e5af4d84d190f3d4fa5e1dd55ce9ca762d"} Feb 20 12:21:22.713126 master-0 kubenswrapper[31420]: I0220 12:21:22.711695 31420 scope.go:117] 
"RemoveContainer" containerID="101e552f3ad99c0eddb9a1eb81c89162720cb2fc3e3410a3f72dba0dfa7ff9b7" Feb 20 12:21:22.715016 master-0 kubenswrapper[31420]: I0220 12:21:22.714885 31420 generic.go:334] "Generic (PLEG): container finished" podID="d206824b-935b-484a-b791-c83386ba4dca" containerID="48cb7fc9a85c70af92323a23f7cb2aede48c384782f5dbe345725014908c6391" exitCode=0 Feb 20 12:21:22.715016 master-0 kubenswrapper[31420]: I0220 12:21:22.715011 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ace2-account-create-update-8fqwl" Feb 20 12:21:22.716488 master-0 kubenswrapper[31420]: I0220 12:21:22.715816 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb02-account-create-update-gjp7p" Feb 20 12:21:22.716488 master-0 kubenswrapper[31420]: I0220 12:21:22.716039 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d206824b-935b-484a-b791-c83386ba4dca","Type":"ContainerDied","Data":"48cb7fc9a85c70af92323a23f7cb2aede48c384782f5dbe345725014908c6391"} Feb 20 12:21:22.716748 master-0 kubenswrapper[31420]: I0220 12:21:22.716692 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f6bb-account-create-update-bmnfz" Feb 20 12:21:22.716949 master-0 kubenswrapper[31420]: I0220 12:21:22.716849 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8jmg4" Feb 20 12:21:22.716949 master-0 kubenswrapper[31420]: I0220 12:21:22.716700 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c69cf75d-mfgck" Feb 20 12:21:22.716949 master-0 kubenswrapper[31420]: I0220 12:21:22.716921 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-zhh86" Feb 20 12:21:22.717744 master-0 kubenswrapper[31420]: I0220 12:21:22.717332 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkmxn" Feb 20 12:21:22.742843 master-0 kubenswrapper[31420]: I0220 12:21:22.742762 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" podStartSLOduration=7.742740314 podStartE2EDuration="7.742740314s" podCreationTimestamp="2026-02-20 12:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:21:22.714768673 +0000 UTC m=+987.434006914" watchObservedRunningTime="2026-02-20 12:21:22.742740314 +0000 UTC m=+987.461978565" Feb 20 12:21:22.850565 master-0 kubenswrapper[31420]: I0220 12:21:22.847401 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e60fa-default-internal-api-0"] Feb 20 12:21:22.965849 master-0 kubenswrapper[31420]: I0220 12:21:22.965685 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e60fa-default-internal-api-0"] Feb 20 12:21:23.077000 master-0 kubenswrapper[31420]: I0220 12:21:23.076943 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e60fa-default-internal-api-0"] Feb 20 12:21:23.077598 master-0 kubenswrapper[31420]: E0220 12:21:23.077568 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1383ebf-b51c-4b56-bdec-191e09ab35ac" containerName="mariadb-database-create" Feb 20 12:21:23.077598 master-0 kubenswrapper[31420]: I0220 12:21:23.077594 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1383ebf-b51c-4b56-bdec-191e09ab35ac" containerName="mariadb-database-create" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: E0220 12:21:23.077632 31420 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4f6becc0-4062-4971-9300-fe40c0538d25" containerName="neutron-httpd" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: I0220 12:21:23.077643 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6becc0-4062-4971-9300-fe40c0538d25" containerName="neutron-httpd" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: E0220 12:21:23.077657 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d142bf65-a53f-4fd8-95b0-46c05306e168" containerName="mariadb-account-create-update" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: I0220 12:21:23.077665 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="d142bf65-a53f-4fd8-95b0-46c05306e168" containerName="mariadb-account-create-update" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: E0220 12:21:23.077684 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8da94d-28af-4392-bdf9-c0d1c6eaeda4" containerName="mariadb-account-create-update" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: I0220 12:21:23.077692 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8da94d-28af-4392-bdf9-c0d1c6eaeda4" containerName="mariadb-account-create-update" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: E0220 12:21:23.077720 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2092a975-d3a8-4034-9301-d99c84087164" containerName="glance-log" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: I0220 12:21:23.077732 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="2092a975-d3a8-4034-9301-d99c84087164" containerName="glance-log" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: E0220 12:21:23.077765 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603ae118-f4a7-48ea-b99b-8a71f297b617" containerName="mariadb-database-create" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: I0220 12:21:23.077776 31420 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="603ae118-f4a7-48ea-b99b-8a71f297b617" containerName="mariadb-database-create" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: E0220 12:21:23.077803 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710" containerName="mariadb-database-create" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: I0220 12:21:23.077813 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710" containerName="mariadb-database-create" Feb 20 12:21:23.077824 master-0 kubenswrapper[31420]: E0220 12:21:23.077829 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6becc0-4062-4971-9300-fe40c0538d25" containerName="neutron-api" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.077837 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6becc0-4062-4971-9300-fe40c0538d25" containerName="neutron-api" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: E0220 12:21:23.077851 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc933f4-f7a5-4ae9-8d43-336be86a5f34" containerName="mariadb-account-create-update" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.077859 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc933f4-f7a5-4ae9-8d43-336be86a5f34" containerName="mariadb-account-create-update" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: E0220 12:21:23.077884 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2092a975-d3a8-4034-9301-d99c84087164" containerName="glance-httpd" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.077893 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="2092a975-d3a8-4034-9301-d99c84087164" containerName="glance-httpd" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078169 31420 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2092a975-d3a8-4034-9301-d99c84087164" containerName="glance-log" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078209 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8da94d-28af-4392-bdf9-c0d1c6eaeda4" containerName="mariadb-account-create-update" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078230 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6becc0-4062-4971-9300-fe40c0538d25" containerName="neutron-api" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078254 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc933f4-f7a5-4ae9-8d43-336be86a5f34" containerName="mariadb-account-create-update" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078270 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="603ae118-f4a7-48ea-b99b-8a71f297b617" containerName="mariadb-database-create" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078297 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1383ebf-b51c-4b56-bdec-191e09ab35ac" containerName="mariadb-database-create" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078310 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6becc0-4062-4971-9300-fe40c0538d25" containerName="neutron-httpd" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078327 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="2092a975-d3a8-4034-9301-d99c84087164" containerName="glance-httpd" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078339 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="d142bf65-a53f-4fd8-95b0-46c05306e168" containerName="mariadb-account-create-update" Feb 20 12:21:23.078361 master-0 kubenswrapper[31420]: I0220 12:21:23.078362 31420 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710" containerName="mariadb-database-create" Feb 20 12:21:23.080111 master-0 kubenswrapper[31420]: I0220 12:21:23.080017 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.088037 master-0 kubenswrapper[31420]: I0220 12:21:23.087972 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-e60fa-default-internal-config-data" Feb 20 12:21:23.088283 master-0 kubenswrapper[31420]: I0220 12:21:23.088111 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 20 12:21:23.111645 master-0 kubenswrapper[31420]: I0220 12:21:23.111570 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e60fa-default-internal-api-0"] Feb 20 12:21:23.116505 master-0 kubenswrapper[31420]: I0220 12:21:23.116468 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c4f4ddf86-w8dng" Feb 20 12:21:23.229853 master-0 kubenswrapper[31420]: I0220 12:21:23.229794 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f49670ed-6985-4875-98d4-8edc26c85fa7-httpd-run\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.229853 master-0 kubenswrapper[31420]: I0220 12:21:23.229847 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-config-data\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.230051 master-0 kubenswrapper[31420]: I0220 12:21:23.229891 31420 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4k7g\" (UniqueName: \"kubernetes.io/projected/f49670ed-6985-4875-98d4-8edc26c85fa7-kube-api-access-k4k7g\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.230051 master-0 kubenswrapper[31420]: I0220 12:21:23.230038 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.230114 master-0 kubenswrapper[31420]: I0220 12:21:23.230055 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-internal-tls-certs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.230114 master-0 kubenswrapper[31420]: I0220 12:21:23.230075 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-combined-ca-bundle\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.230114 master-0 kubenswrapper[31420]: I0220 12:21:23.230102 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-scripts\") pod 
\"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.230196 master-0 kubenswrapper[31420]: I0220 12:21:23.230118 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f49670ed-6985-4875-98d4-8edc26c85fa7-logs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.333552 master-0 kubenswrapper[31420]: I0220 12:21:23.331942 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c69cf75d-mfgck"] Feb 20 12:21:23.333552 master-0 kubenswrapper[31420]: I0220 12:21:23.332256 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.333552 master-0 kubenswrapper[31420]: I0220 12:21:23.332308 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-internal-tls-certs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.333552 master-0 kubenswrapper[31420]: I0220 12:21:23.332330 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-combined-ca-bundle\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " 
pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.333552 master-0 kubenswrapper[31420]: I0220 12:21:23.332361 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-scripts\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.333552 master-0 kubenswrapper[31420]: I0220 12:21:23.332378 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f49670ed-6985-4875-98d4-8edc26c85fa7-logs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.333552 master-0 kubenswrapper[31420]: I0220 12:21:23.332454 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f49670ed-6985-4875-98d4-8edc26c85fa7-httpd-run\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.333552 master-0 kubenswrapper[31420]: I0220 12:21:23.332513 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-config-data\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.333552 master-0 kubenswrapper[31420]: I0220 12:21:23.332595 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4k7g\" (UniqueName: \"kubernetes.io/projected/f49670ed-6985-4875-98d4-8edc26c85fa7-kube-api-access-k4k7g\") pod \"glance-e60fa-default-internal-api-0\" (UID: 
\"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.336679 master-0 kubenswrapper[31420]: I0220 12:21:23.336034 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-internal-tls-certs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.336679 master-0 kubenswrapper[31420]: I0220 12:21:23.336481 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f49670ed-6985-4875-98d4-8edc26c85fa7-httpd-run\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.336912 master-0 kubenswrapper[31420]: I0220 12:21:23.336833 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f49670ed-6985-4875-98d4-8edc26c85fa7-logs\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.339215 master-0 kubenswrapper[31420]: I0220 12:21:23.337606 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-config-data\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:23.339215 master-0 kubenswrapper[31420]: I0220 12:21:23.338355 31420 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 12:21:23.339215 master-0 kubenswrapper[31420]: I0220 12:21:23.338379 31420 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7fdfa428e75fa078c9f92f14e5c5b13eb7c249d4961d8f3276e21bff102641ef/globalmount\"" pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:21:23.339215 master-0 kubenswrapper[31420]: I0220 12:21:23.338602 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-combined-ca-bundle\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:21:23.340498 master-0 kubenswrapper[31420]: I0220 12:21:23.340476 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f49670ed-6985-4875-98d4-8edc26c85fa7-scripts\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:21:23.354510 master-0 kubenswrapper[31420]: I0220 12:21:23.354433 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c69cf75d-mfgck"]
Feb 20 12:21:23.507981 master-0 kubenswrapper[31420]: I0220 12:21:23.507906 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4k7g\" (UniqueName: \"kubernetes.io/projected/f49670ed-6985-4875-98d4-8edc26c85fa7-kube-api-access-k4k7g\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:21:23.508187 master-0 kubenswrapper[31420]: E0220 12:21:23.507874 31420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1383ebf_b51c_4b56_bdec_191e09ab35ac.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd142bf65_a53f_4fd8_95b0_46c05306e168.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod603ae118_f4a7_48ea_b99b_8a71f297b617.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f6becc0_4062_4971_9300_fe40c0538d25.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ec6ab0a_82f4_4cbd_bba7_b3fe8f9a1710.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1383ebf_b51c_4b56_bdec_191e09ab35ac.slice/crio-c1372661caae77c8a1a19a03e66026ed4ec8c5d15364085cc7bd557840e0d05c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f6becc0_4062_4971_9300_fe40c0538d25.slice/crio-437ec292af3d31dca563cc2b357a655612ea9a1a6c3727e13becc7ee8d483d78\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dc933f4_f7a5_4ae9_8d43_336be86a5f34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod603ae118_f4a7_48ea_b99b_8a71f297b617.slice/crio-a24cbc6706db6434d1f9543e7c0cf590b4299decf82a29e4c62dd0a453917e05\": RecentStats: unable to find data in memory cache]"
Feb 20 12:21:23.513682 master-0 kubenswrapper[31420]: I0220 12:21:23.513410 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2092a975-d3a8-4034-9301-d99c84087164" path="/var/lib/kubelet/pods/2092a975-d3a8-4034-9301-d99c84087164/volumes"
Feb 20 12:21:23.514626 master-0 kubenswrapper[31420]: I0220 12:21:23.514414 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f6becc0-4062-4971-9300-fe40c0538d25" path="/var/lib/kubelet/pods/4f6becc0-4062-4971-9300-fe40c0538d25/volumes"
Feb 20 12:21:23.755319 master-0 kubenswrapper[31420]: I0220 12:21:23.755278 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 20 12:21:23.762969 master-0 kubenswrapper[31420]: I0220 12:21:23.762836 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-external-api-0" event={"ID":"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d","Type":"ContainerStarted","Data":"0b3d2e5aff7d820828862571680e0e3d5d734e914c4c18e41467f79a885f7d06"}
Feb 20 12:21:23.768138 master-0 kubenswrapper[31420]: I0220 12:21:23.767848 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 20 12:21:23.768534 master-0 kubenswrapper[31420]: I0220 12:21:23.768472 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d206824b-935b-484a-b791-c83386ba4dca","Type":"ContainerDied","Data":"0786f0326ab0f0f5e027ff7275aaea9f60ec1040053fd2426528dcd19e10ddbc"}
Feb 20 12:21:23.769716 master-0 kubenswrapper[31420]: I0220 12:21:23.769666 31420 scope.go:117] "RemoveContainer" containerID="48cb7fc9a85c70af92323a23f7cb2aede48c384782f5dbe345725014908c6391"
Feb 20 12:21:23.799436 master-0 kubenswrapper[31420]: I0220 12:21:23.799348 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7556d48dcd-b785x"]
Feb 20 12:21:23.799808 master-0 kubenswrapper[31420]: I0220 12:21:23.799761 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7556d48dcd-b785x" podUID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerName="placement-log" containerID="cri-o://b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4" gracePeriod=30
Feb 20 12:21:23.799950 master-0 kubenswrapper[31420]: I0220 12:21:23.799920 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7556d48dcd-b785x" podUID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerName="placement-api" containerID="cri-o://7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252" gracePeriod=30
Feb 20 12:21:23.854296 master-0 kubenswrapper[31420]: I0220 12:21:23.854238 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-scripts\") pod \"d206824b-935b-484a-b791-c83386ba4dca\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") "
Feb 20 12:21:23.854459 master-0 kubenswrapper[31420]: I0220 12:21:23.854313 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-combined-ca-bundle\") pod \"d206824b-935b-484a-b791-c83386ba4dca\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") "
Feb 20 12:21:23.854931 master-0 kubenswrapper[31420]: I0220 12:21:23.854891 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"d206824b-935b-484a-b791-c83386ba4dca\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") "
Feb 20 12:21:23.854931 master-0 kubenswrapper[31420]: I0220 12:21:23.854931 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic\") pod \"d206824b-935b-484a-b791-c83386ba4dca\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") "
Feb 20 12:21:23.855168 master-0 kubenswrapper[31420]: I0220 12:21:23.855016 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-config\") pod \"d206824b-935b-484a-b791-c83386ba4dca\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") "
Feb 20 12:21:23.855168 master-0 kubenswrapper[31420]: I0220 12:21:23.855078 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d206824b-935b-484a-b791-c83386ba4dca-etc-podinfo\") pod \"d206824b-935b-484a-b791-c83386ba4dca\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") "
Feb 20 12:21:23.855168 master-0 kubenswrapper[31420]: I0220 12:21:23.855161 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jblfp\" (UniqueName: \"kubernetes.io/projected/d206824b-935b-484a-b791-c83386ba4dca-kube-api-access-jblfp\") pod \"d206824b-935b-484a-b791-c83386ba4dca\" (UID: \"d206824b-935b-484a-b791-c83386ba4dca\") "
Feb 20 12:21:23.855702 master-0 kubenswrapper[31420]: I0220 12:21:23.855309 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "d206824b-935b-484a-b791-c83386ba4dca" (UID: "d206824b-935b-484a-b791-c83386ba4dca"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:21:23.855702 master-0 kubenswrapper[31420]: I0220 12:21:23.855622 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "d206824b-935b-484a-b791-c83386ba4dca" (UID: "d206824b-935b-484a-b791-c83386ba4dca"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:21:23.856671 master-0 kubenswrapper[31420]: I0220 12:21:23.856109 31420 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:23.856671 master-0 kubenswrapper[31420]: I0220 12:21:23.856140 31420 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d206824b-935b-484a-b791-c83386ba4dca-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:23.859667 master-0 kubenswrapper[31420]: I0220 12:21:23.859164 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d206824b-935b-484a-b791-c83386ba4dca-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "d206824b-935b-484a-b791-c83386ba4dca" (UID: "d206824b-935b-484a-b791-c83386ba4dca"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 20 12:21:23.860614 master-0 kubenswrapper[31420]: I0220 12:21:23.860477 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-config" (OuterVolumeSpecName: "config") pod "d206824b-935b-484a-b791-c83386ba4dca" (UID: "d206824b-935b-484a-b791-c83386ba4dca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:23.864858 master-0 kubenswrapper[31420]: I0220 12:21:23.864797 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-scripts" (OuterVolumeSpecName: "scripts") pod "d206824b-935b-484a-b791-c83386ba4dca" (UID: "d206824b-935b-484a-b791-c83386ba4dca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:23.875390 master-0 kubenswrapper[31420]: I0220 12:21:23.875345 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d206824b-935b-484a-b791-c83386ba4dca-kube-api-access-jblfp" (OuterVolumeSpecName: "kube-api-access-jblfp") pod "d206824b-935b-484a-b791-c83386ba4dca" (UID: "d206824b-935b-484a-b791-c83386ba4dca"). InnerVolumeSpecName "kube-api-access-jblfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:21:23.894617 master-0 kubenswrapper[31420]: I0220 12:21:23.894519 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d206824b-935b-484a-b791-c83386ba4dca" (UID: "d206824b-935b-484a-b791-c83386ba4dca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:21:23.958518 master-0 kubenswrapper[31420]: I0220 12:21:23.958451 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jblfp\" (UniqueName: \"kubernetes.io/projected/d206824b-935b-484a-b791-c83386ba4dca-kube-api-access-jblfp\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:23.958518 master-0 kubenswrapper[31420]: I0220 12:21:23.958506 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:23.958518 master-0 kubenswrapper[31420]: I0220 12:21:23.958520 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:23.958518 master-0 kubenswrapper[31420]: I0220 12:21:23.958542 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d206824b-935b-484a-b791-c83386ba4dca-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:23.958848 master-0 kubenswrapper[31420]: I0220 12:21:23.958554 31420 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d206824b-935b-484a-b791-c83386ba4dca-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 20 12:21:24.222051 master-0 kubenswrapper[31420]: I0220 12:21:24.221976 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-02654c43-ecb4-4424-b56e-75484a4f3f54\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a16d4698-b9da-481e-bdf6-e66bfc4f546b\") pod \"glance-e60fa-default-internal-api-0\" (UID: \"f49670ed-6985-4875-98d4-8edc26c85fa7\") " pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:21:24.471154 master-0 kubenswrapper[31420]: I0220 12:21:24.469789 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 20 12:21:24.501550 master-0 kubenswrapper[31420]: I0220 12:21:24.490613 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 20 12:21:24.501550 master-0 kubenswrapper[31420]: I0220 12:21:24.499666 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nd8sh"]
Feb 20 12:21:24.501550 master-0 kubenswrapper[31420]: E0220 12:21:24.500174 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d206824b-935b-484a-b791-c83386ba4dca" containerName="ironic-python-agent-init"
Feb 20 12:21:24.501550 master-0 kubenswrapper[31420]: I0220 12:21:24.500186 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="d206824b-935b-484a-b791-c83386ba4dca" containerName="ironic-python-agent-init"
Feb 20 12:21:24.501550 master-0 kubenswrapper[31420]: I0220 12:21:24.500411 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="d206824b-935b-484a-b791-c83386ba4dca" containerName="ironic-python-agent-init"
Feb 20 12:21:24.501550 master-0 kubenswrapper[31420]: I0220 12:21:24.501249 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nd8sh"
Feb 20 12:21:24.514560 master-0 kubenswrapper[31420]: I0220 12:21:24.511376 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 20 12:21:24.514560 master-0 kubenswrapper[31420]: I0220 12:21:24.511574 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 20 12:21:24.560421 master-0 kubenswrapper[31420]: I0220 12:21:24.554822 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nd8sh"]
Feb 20 12:21:24.595270 master-0 kubenswrapper[31420]: I0220 12:21:24.595227 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh"
Feb 20 12:21:24.596174 master-0 kubenswrapper[31420]: I0220 12:21:24.595622 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjbjp\" (UniqueName: \"kubernetes.io/projected/41df128e-94f0-4150-b0d6-2e81542c1ab7-kube-api-access-qjbjp\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh"
Feb 20 12:21:24.596424 master-0 kubenswrapper[31420]: I0220 12:21:24.596406 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-config-data\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh"
Feb 20 12:21:24.596564 master-0 kubenswrapper[31420]: I0220 12:21:24.596549 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-scripts\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh"
Feb 20 12:21:24.602855 master-0 kubenswrapper[31420]: I0220 12:21:24.602698 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Feb 20 12:21:24.615014 master-0 kubenswrapper[31420]: I0220 12:21:24.614665 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e60fa-default-internal-api-0"
Feb 20 12:21:24.621575 master-0 kubenswrapper[31420]: I0220 12:21:24.621497 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 20 12:21:24.624741 master-0 kubenswrapper[31420]: I0220 12:21:24.624701 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Feb 20 12:21:24.627213 master-0 kubenswrapper[31420]: I0220 12:21:24.624809 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Feb 20 12:21:24.627747 master-0 kubenswrapper[31420]: I0220 12:21:24.624925 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Feb 20 12:21:24.627926 master-0 kubenswrapper[31420]: I0220 12:21:24.625023 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc"
Feb 20 12:21:24.628099 master-0 kubenswrapper[31420]: I0220 12:21:24.625057 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc"
Feb 20 12:21:24.663352 master-0 kubenswrapper[31420]: I0220 12:21:24.663284 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699291 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-config\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0"
Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699382 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-config-data\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh"
Feb 20
12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699425 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699479 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-scripts\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699700 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699777 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699833 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-scripts\") pod \"ironic-inspector-0\" (UID: 
\"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699870 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699929 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.699958 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.700110 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjbjp\" (UniqueName: \"kubernetes.io/projected/41df128e-94f0-4150-b0d6-2e81542c1ab7-kube-api-access-qjbjp\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.700309 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws97x\" (UniqueName: 
\"kubernetes.io/projected/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-kube-api-access-ws97x\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.703553 master-0 kubenswrapper[31420]: I0220 12:21:24.700382 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.708548 master-0 kubenswrapper[31420]: I0220 12:21:24.704469 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-config-data\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:24.708548 master-0 kubenswrapper[31420]: I0220 12:21:24.707131 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-scripts\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:24.712550 master-0 kubenswrapper[31420]: I0220 12:21:24.710095 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:24.724053 master-0 kubenswrapper[31420]: I0220 12:21:24.723956 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjbjp\" (UniqueName: 
\"kubernetes.io/projected/41df128e-94f0-4150-b0d6-2e81542c1ab7-kube-api-access-qjbjp\") pod \"nova-cell0-conductor-db-sync-nd8sh\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:24.791307 master-0 kubenswrapper[31420]: I0220 12:21:24.790463 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-external-api-0" event={"ID":"d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d","Type":"ContainerStarted","Data":"69da3a84591b44b62029e7c719db35ad0be6fc0317a7e4a7fea687577767cd71"} Feb 20 12:21:24.795165 master-0 kubenswrapper[31420]: I0220 12:21:24.792229 31420 generic.go:334] "Generic (PLEG): container finished" podID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerID="b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4" exitCode=143 Feb 20 12:21:24.795165 master-0 kubenswrapper[31420]: I0220 12:21:24.792270 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7556d48dcd-b785x" event={"ID":"3b5f8852-b52b-402c-bc6b-f21c99991432","Type":"ContainerDied","Data":"b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4"} Feb 20 12:21:24.802193 master-0 kubenswrapper[31420]: I0220 12:21:24.802028 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws97x\" (UniqueName: \"kubernetes.io/projected/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-kube-api-access-ws97x\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.802193 master-0 kubenswrapper[31420]: I0220 12:21:24.802083 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.807021 master-0 kubenswrapper[31420]: 
I0220 12:21:24.806986 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-config\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.807140 master-0 kubenswrapper[31420]: I0220 12:21:24.807051 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.807140 master-0 kubenswrapper[31420]: I0220 12:21:24.807114 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.807246 master-0 kubenswrapper[31420]: I0220 12:21:24.807153 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-scripts\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.807246 master-0 kubenswrapper[31420]: I0220 12:21:24.807177 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.807687 master-0 kubenswrapper[31420]: I0220 12:21:24.807661 31420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.807964 master-0 kubenswrapper[31420]: I0220 12:21:24.807838 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.807964 master-0 kubenswrapper[31420]: I0220 12:21:24.807938 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.808079 master-0 kubenswrapper[31420]: I0220 12:21:24.807981 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.808895 master-0 kubenswrapper[31420]: I0220 12:21:24.808820 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.811346 master-0 kubenswrapper[31420]: I0220 12:21:24.811315 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-config\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.812507 master-0 kubenswrapper[31420]: I0220 12:21:24.812387 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-scripts\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.812937 master-0 kubenswrapper[31420]: I0220 12:21:24.812896 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.813317 master-0 kubenswrapper[31420]: I0220 12:21:24.813286 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.817296 master-0 kubenswrapper[31420]: I0220 12:21:24.817246 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.822104 master-0 kubenswrapper[31420]: I0220 12:21:24.822058 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws97x\" (UniqueName: \"kubernetes.io/projected/f61d99d3-557f-4054-9f41-b5fa83cb1ba9-kube-api-access-ws97x\") pod \"ironic-inspector-0\" (UID: 
\"f61d99d3-557f-4054-9f41-b5fa83cb1ba9\") " pod="openstack/ironic-inspector-0" Feb 20 12:21:24.853065 master-0 kubenswrapper[31420]: I0220 12:21:24.852783 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e60fa-default-external-api-0" podStartSLOduration=7.852707022 podStartE2EDuration="7.852707022s" podCreationTimestamp="2026-02-20 12:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:21:24.83567049 +0000 UTC m=+989.554908731" watchObservedRunningTime="2026-02-20 12:21:24.852707022 +0000 UTC m=+989.571945263" Feb 20 12:21:24.897558 master-0 kubenswrapper[31420]: I0220 12:21:24.895061 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:25.109791 master-0 kubenswrapper[31420]: I0220 12:21:25.107816 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 20 12:21:25.241816 master-0 kubenswrapper[31420]: I0220 12:21:25.241750 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e60fa-default-internal-api-0"] Feb 20 12:21:25.388136 master-0 kubenswrapper[31420]: I0220 12:21:25.387245 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nd8sh"] Feb 20 12:21:25.515778 master-0 kubenswrapper[31420]: I0220 12:21:25.515702 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d206824b-935b-484a-b791-c83386ba4dca" path="/var/lib/kubelet/pods/d206824b-935b-484a-b791-c83386ba4dca/volumes" Feb 20 12:21:25.684004 master-0 kubenswrapper[31420]: I0220 12:21:25.683922 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 12:21:25.841992 master-0 kubenswrapper[31420]: I0220 12:21:25.841913 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-nd8sh" event={"ID":"41df128e-94f0-4150-b0d6-2e81542c1ab7","Type":"ContainerStarted","Data":"ead4eec4276552029220e85f5687fac39b5a524fe1d037b46465d7efdf15e41b"} Feb 20 12:21:25.844123 master-0 kubenswrapper[31420]: I0220 12:21:25.844069 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-internal-api-0" event={"ID":"f49670ed-6985-4875-98d4-8edc26c85fa7","Type":"ContainerStarted","Data":"f519f2f989b6fe403ac97d0cef73023bc2255e2722e1de5238bf1d8f1a82953b"} Feb 20 12:21:25.847204 master-0 kubenswrapper[31420]: I0220 12:21:25.846262 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f61d99d3-557f-4054-9f41-b5fa83cb1ba9","Type":"ContainerStarted","Data":"a101bb2a2ea6cc7e426eae3a2f41c39ed92c22a86c5a8d2671b995f1a001720a"} Feb 20 12:21:26.863400 master-0 kubenswrapper[31420]: I0220 12:21:26.863310 31420 generic.go:334] "Generic (PLEG): container finished" podID="1c15d66e-eaa8-4305-a5cb-1fa14e718d2c" containerID="38ae6bc125342b23cb307b6eb1ff82e5af4d84d190f3d4fa5e1dd55ce9ca762d" exitCode=0 Feb 20 12:21:26.864858 master-0 kubenswrapper[31420]: I0220 12:21:26.863416 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerDied","Data":"38ae6bc125342b23cb307b6eb1ff82e5af4d84d190f3d4fa5e1dd55ce9ca762d"} Feb 20 12:21:26.867386 master-0 kubenswrapper[31420]: I0220 12:21:26.867235 31420 generic.go:334] "Generic (PLEG): container finished" podID="f61d99d3-557f-4054-9f41-b5fa83cb1ba9" containerID="f49ad1e39439a3368f9932f17718c3ff2f8c6798c7fc5df4a170a06479973f50" exitCode=0 Feb 20 12:21:26.867386 master-0 kubenswrapper[31420]: I0220 12:21:26.867306 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"f61d99d3-557f-4054-9f41-b5fa83cb1ba9","Type":"ContainerDied","Data":"f49ad1e39439a3368f9932f17718c3ff2f8c6798c7fc5df4a170a06479973f50"} Feb 20 12:21:26.874294 master-0 kubenswrapper[31420]: I0220 12:21:26.873115 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-internal-api-0" event={"ID":"f49670ed-6985-4875-98d4-8edc26c85fa7","Type":"ContainerStarted","Data":"ce399ee1613f10d4d8df3032dcb07704578099a0cbad4169b3556e2ac15dfce0"} Feb 20 12:21:26.874294 master-0 kubenswrapper[31420]: I0220 12:21:26.873182 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e60fa-default-internal-api-0" event={"ID":"f49670ed-6985-4875-98d4-8edc26c85fa7","Type":"ContainerStarted","Data":"76fd52648fcd0c74da47ceed4b8afb193d3d13fbc5ccc2aeca82b826725308d4"} Feb 20 12:21:27.245301 master-0 kubenswrapper[31420]: I0220 12:21:27.245194 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e60fa-default-internal-api-0" podStartSLOduration=5.245173095 podStartE2EDuration="5.245173095s" podCreationTimestamp="2026-02-20 12:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:21:27.237191219 +0000 UTC m=+991.956429460" watchObservedRunningTime="2026-02-20 12:21:27.245173095 +0000 UTC m=+991.964411336" Feb 20 12:21:27.521040 master-0 kubenswrapper[31420]: I0220 12:21:27.520998 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:21:27.612348 master-0 kubenswrapper[31420]: I0220 12:21:27.609478 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-public-tls-certs\") pod \"3b5f8852-b52b-402c-bc6b-f21c99991432\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " Feb 20 12:21:27.612348 master-0 kubenswrapper[31420]: I0220 12:21:27.609820 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-config-data\") pod \"3b5f8852-b52b-402c-bc6b-f21c99991432\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " Feb 20 12:21:27.612348 master-0 kubenswrapper[31420]: I0220 12:21:27.609868 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-internal-tls-certs\") pod \"3b5f8852-b52b-402c-bc6b-f21c99991432\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " Feb 20 12:21:27.612348 master-0 kubenswrapper[31420]: I0220 12:21:27.610089 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b5f8852-b52b-402c-bc6b-f21c99991432-logs\") pod \"3b5f8852-b52b-402c-bc6b-f21c99991432\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " Feb 20 12:21:27.612348 master-0 kubenswrapper[31420]: I0220 12:21:27.610135 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cljq9\" (UniqueName: \"kubernetes.io/projected/3b5f8852-b52b-402c-bc6b-f21c99991432-kube-api-access-cljq9\") pod \"3b5f8852-b52b-402c-bc6b-f21c99991432\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " Feb 20 12:21:27.612348 master-0 kubenswrapper[31420]: I0220 12:21:27.610217 31420 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-combined-ca-bundle\") pod \"3b5f8852-b52b-402c-bc6b-f21c99991432\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " Feb 20 12:21:27.612348 master-0 kubenswrapper[31420]: I0220 12:21:27.610305 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-scripts\") pod \"3b5f8852-b52b-402c-bc6b-f21c99991432\" (UID: \"3b5f8852-b52b-402c-bc6b-f21c99991432\") " Feb 20 12:21:27.612348 master-0 kubenswrapper[31420]: I0220 12:21:27.610508 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b5f8852-b52b-402c-bc6b-f21c99991432-logs" (OuterVolumeSpecName: "logs") pod "3b5f8852-b52b-402c-bc6b-f21c99991432" (UID: "3b5f8852-b52b-402c-bc6b-f21c99991432"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:21:27.612348 master-0 kubenswrapper[31420]: I0220 12:21:27.611518 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b5f8852-b52b-402c-bc6b-f21c99991432-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:27.615030 master-0 kubenswrapper[31420]: I0220 12:21:27.614943 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-scripts" (OuterVolumeSpecName: "scripts") pod "3b5f8852-b52b-402c-bc6b-f21c99991432" (UID: "3b5f8852-b52b-402c-bc6b-f21c99991432"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:27.615887 master-0 kubenswrapper[31420]: I0220 12:21:27.615840 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5f8852-b52b-402c-bc6b-f21c99991432-kube-api-access-cljq9" (OuterVolumeSpecName: "kube-api-access-cljq9") pod "3b5f8852-b52b-402c-bc6b-f21c99991432" (UID: "3b5f8852-b52b-402c-bc6b-f21c99991432"). InnerVolumeSpecName "kube-api-access-cljq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:27.684670 master-0 kubenswrapper[31420]: I0220 12:21:27.684600 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-config-data" (OuterVolumeSpecName: "config-data") pod "3b5f8852-b52b-402c-bc6b-f21c99991432" (UID: "3b5f8852-b52b-402c-bc6b-f21c99991432"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:27.713676 master-0 kubenswrapper[31420]: I0220 12:21:27.713567 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:27.713676 master-0 kubenswrapper[31420]: I0220 12:21:27.713618 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cljq9\" (UniqueName: \"kubernetes.io/projected/3b5f8852-b52b-402c-bc6b-f21c99991432-kube-api-access-cljq9\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:27.713676 master-0 kubenswrapper[31420]: I0220 12:21:27.713634 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:27.714243 master-0 kubenswrapper[31420]: I0220 12:21:27.714190 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b5f8852-b52b-402c-bc6b-f21c99991432" (UID: "3b5f8852-b52b-402c-bc6b-f21c99991432"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:27.740564 master-0 kubenswrapper[31420]: I0220 12:21:27.740472 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3b5f8852-b52b-402c-bc6b-f21c99991432" (UID: "3b5f8852-b52b-402c-bc6b-f21c99991432"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:27.793402 master-0 kubenswrapper[31420]: I0220 12:21:27.793342 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3b5f8852-b52b-402c-bc6b-f21c99991432" (UID: "3b5f8852-b52b-402c-bc6b-f21c99991432"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:27.820317 master-0 kubenswrapper[31420]: I0220 12:21:27.820246 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:27.820448 master-0 kubenswrapper[31420]: I0220 12:21:27.820328 31420 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:27.820448 master-0 kubenswrapper[31420]: I0220 12:21:27.820344 31420 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b5f8852-b52b-402c-bc6b-f21c99991432-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:27.891671 master-0 kubenswrapper[31420]: I0220 12:21:27.891586 31420 generic.go:334] "Generic (PLEG): container finished" podID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerID="7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252" exitCode=0 Feb 20 12:21:27.892895 master-0 kubenswrapper[31420]: I0220 12:21:27.892853 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7556d48dcd-b785x" Feb 20 12:21:27.893766 master-0 kubenswrapper[31420]: I0220 12:21:27.893672 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7556d48dcd-b785x" event={"ID":"3b5f8852-b52b-402c-bc6b-f21c99991432","Type":"ContainerDied","Data":"7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252"} Feb 20 12:21:27.893855 master-0 kubenswrapper[31420]: I0220 12:21:27.893791 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7556d48dcd-b785x" event={"ID":"3b5f8852-b52b-402c-bc6b-f21c99991432","Type":"ContainerDied","Data":"89bd61052748ca0169c372476387958e2c9e4dbbf7e4b9932eb1e732cd333afe"} Feb 20 12:21:27.893855 master-0 kubenswrapper[31420]: I0220 12:21:27.893816 31420 scope.go:117] "RemoveContainer" containerID="7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252" Feb 20 12:21:27.937793 master-0 kubenswrapper[31420]: I0220 12:21:27.937660 31420 scope.go:117] "RemoveContainer" containerID="b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4" Feb 20 12:21:27.949827 master-0 kubenswrapper[31420]: I0220 12:21:27.943092 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7556d48dcd-b785x"] Feb 20 12:21:27.956843 master-0 kubenswrapper[31420]: I0220 12:21:27.956765 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7556d48dcd-b785x"] Feb 20 12:21:27.984307 master-0 kubenswrapper[31420]: I0220 12:21:27.984263 31420 scope.go:117] "RemoveContainer" containerID="7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252" Feb 20 12:21:27.984888 master-0 kubenswrapper[31420]: E0220 12:21:27.984832 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252\": container with ID starting with 
7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252 not found: ID does not exist" containerID="7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252" Feb 20 12:21:27.985001 master-0 kubenswrapper[31420]: I0220 12:21:27.984882 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252"} err="failed to get container status \"7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252\": rpc error: code = NotFound desc = could not find container \"7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252\": container with ID starting with 7682996cc4ca0aabb34c0e54c27bbacce47e83b2793b1113029a21a214c17252 not found: ID does not exist" Feb 20 12:21:27.985001 master-0 kubenswrapper[31420]: I0220 12:21:27.984911 31420 scope.go:117] "RemoveContainer" containerID="b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4" Feb 20 12:21:27.985283 master-0 kubenswrapper[31420]: E0220 12:21:27.985233 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4\": container with ID starting with b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4 not found: ID does not exist" containerID="b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4" Feb 20 12:21:27.985421 master-0 kubenswrapper[31420]: I0220 12:21:27.985276 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4"} err="failed to get container status \"b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4\": rpc error: code = NotFound desc = could not find container \"b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4\": container with ID starting with 
b7f17e6556dffbecfe0e85e616524c58fb2ab554cbabbf0e3b7d11b8c852c3f4 not found: ID does not exist" Feb 20 12:21:29.233122 master-0 kubenswrapper[31420]: I0220 12:21:29.233026 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:21:29.233122 master-0 kubenswrapper[31420]: I0220 12:21:29.233099 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:21:29.270154 master-0 kubenswrapper[31420]: I0220 12:21:29.270024 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:21:29.284704 master-0 kubenswrapper[31420]: I0220 12:21:29.284646 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:21:29.516821 master-0 kubenswrapper[31420]: I0220 12:21:29.516764 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5f8852-b52b-402c-bc6b-f21c99991432" path="/var/lib/kubelet/pods/3b5f8852-b52b-402c-bc6b-f21c99991432/volumes" Feb 20 12:21:29.929268 master-0 kubenswrapper[31420]: I0220 12:21:29.929132 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:21:29.929750 master-0 kubenswrapper[31420]: I0220 12:21:29.929654 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:21:30.699762 master-0 kubenswrapper[31420]: I0220 12:21:30.699686 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" Feb 20 12:21:31.159556 master-0 kubenswrapper[31420]: I0220 12:21:31.155281 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5589979f4f-n6jwb"] Feb 20 12:21:31.159556 master-0 kubenswrapper[31420]: 
I0220 12:21:31.155554 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" podUID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" containerName="dnsmasq-dns" containerID="cri-o://e7e08004d7636b14fa970434cb4dff81287ef80692e718ae5831a963bda9c13a" gracePeriod=10 Feb 20 12:21:31.890116 master-0 kubenswrapper[31420]: I0220 12:21:31.890044 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:21:31.936022 master-0 kubenswrapper[31420]: I0220 12:21:31.932905 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e60fa-default-external-api-0" Feb 20 12:21:31.961111 master-0 kubenswrapper[31420]: I0220 12:21:31.961055 31420 generic.go:334] "Generic (PLEG): container finished" podID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" containerID="e7e08004d7636b14fa970434cb4dff81287ef80692e718ae5831a963bda9c13a" exitCode=0 Feb 20 12:21:31.961339 master-0 kubenswrapper[31420]: I0220 12:21:31.961141 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" event={"ID":"bedd35bf-2057-4ed2-b1bd-5b65e43788b0","Type":"ContainerDied","Data":"e7e08004d7636b14fa970434cb4dff81287ef80692e718ae5831a963bda9c13a"} Feb 20 12:21:32.722934 master-0 kubenswrapper[31420]: I0220 12:21:32.722860 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" podUID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.235:5353: connect: connection refused" Feb 20 12:21:34.400905 master-0 kubenswrapper[31420]: I0220 12:21:34.400829 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:21:34.441541 master-0 kubenswrapper[31420]: I0220 12:21:34.441462 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-config\") pod \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " Feb 20 12:21:34.441751 master-0 kubenswrapper[31420]: I0220 12:21:34.441675 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-svc\") pod \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " Feb 20 12:21:34.441874 master-0 kubenswrapper[31420]: I0220 12:21:34.441833 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-nb\") pod \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " Feb 20 12:21:34.442339 master-0 kubenswrapper[31420]: I0220 12:21:34.442312 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdrjl\" (UniqueName: \"kubernetes.io/projected/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-kube-api-access-wdrjl\") pod \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " Feb 20 12:21:34.442388 master-0 kubenswrapper[31420]: I0220 12:21:34.442341 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-swift-storage-0\") pod \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " Feb 20 12:21:34.442388 master-0 kubenswrapper[31420]: I0220 12:21:34.442383 31420 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-sb\") pod \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\" (UID: \"bedd35bf-2057-4ed2-b1bd-5b65e43788b0\") " Feb 20 12:21:34.445241 master-0 kubenswrapper[31420]: I0220 12:21:34.445186 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-kube-api-access-wdrjl" (OuterVolumeSpecName: "kube-api-access-wdrjl") pod "bedd35bf-2057-4ed2-b1bd-5b65e43788b0" (UID: "bedd35bf-2057-4ed2-b1bd-5b65e43788b0"). InnerVolumeSpecName "kube-api-access-wdrjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:34.502570 master-0 kubenswrapper[31420]: I0220 12:21:34.502238 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bedd35bf-2057-4ed2-b1bd-5b65e43788b0" (UID: "bedd35bf-2057-4ed2-b1bd-5b65e43788b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:21:34.503741 master-0 kubenswrapper[31420]: I0220 12:21:34.503673 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bedd35bf-2057-4ed2-b1bd-5b65e43788b0" (UID: "bedd35bf-2057-4ed2-b1bd-5b65e43788b0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:21:34.514338 master-0 kubenswrapper[31420]: I0220 12:21:34.514234 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bedd35bf-2057-4ed2-b1bd-5b65e43788b0" (UID: "bedd35bf-2057-4ed2-b1bd-5b65e43788b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:21:34.515754 master-0 kubenswrapper[31420]: I0220 12:21:34.515699 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bedd35bf-2057-4ed2-b1bd-5b65e43788b0" (UID: "bedd35bf-2057-4ed2-b1bd-5b65e43788b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:21:34.591412 master-0 kubenswrapper[31420]: I0220 12:21:34.589502 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-config" (OuterVolumeSpecName: "config") pod "bedd35bf-2057-4ed2-b1bd-5b65e43788b0" (UID: "bedd35bf-2057-4ed2-b1bd-5b65e43788b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:21:34.591412 master-0 kubenswrapper[31420]: I0220 12:21:34.590560 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:34.591412 master-0 kubenswrapper[31420]: I0220 12:21:34.590602 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdrjl\" (UniqueName: \"kubernetes.io/projected/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-kube-api-access-wdrjl\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:34.591412 master-0 kubenswrapper[31420]: I0220 12:21:34.590614 31420 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:34.591412 master-0 kubenswrapper[31420]: I0220 12:21:34.590622 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:34.591412 master-0 kubenswrapper[31420]: I0220 12:21:34.590634 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:34.591412 master-0 kubenswrapper[31420]: I0220 12:21:34.590644 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedd35bf-2057-4ed2-b1bd-5b65e43788b0-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:34.615228 master-0 kubenswrapper[31420]: I0220 12:21:34.615174 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:34.615352 
master-0 kubenswrapper[31420]: I0220 12:21:34.615235 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:34.658977 master-0 kubenswrapper[31420]: I0220 12:21:34.656167 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:34.679431 master-0 kubenswrapper[31420]: I0220 12:21:34.679365 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:35.009065 master-0 kubenswrapper[31420]: I0220 12:21:35.008998 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" event={"ID":"bedd35bf-2057-4ed2-b1bd-5b65e43788b0","Type":"ContainerDied","Data":"ee909ce32fda6e59b257bb37cdb642db6d50c72a1f9069025dc58635e080816d"} Feb 20 12:21:35.009065 master-0 kubenswrapper[31420]: I0220 12:21:35.009079 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:35.009432 master-0 kubenswrapper[31420]: I0220 12:21:35.009102 31420 scope.go:117] "RemoveContainer" containerID="e7e08004d7636b14fa970434cb4dff81287ef80692e718ae5831a963bda9c13a" Feb 20 12:21:35.009432 master-0 kubenswrapper[31420]: I0220 12:21:35.009145 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5589979f4f-n6jwb" Feb 20 12:21:35.009873 master-0 kubenswrapper[31420]: I0220 12:21:35.009806 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:35.071193 master-0 kubenswrapper[31420]: I0220 12:21:35.071110 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5589979f4f-n6jwb"] Feb 20 12:21:35.087022 master-0 kubenswrapper[31420]: I0220 12:21:35.086959 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5589979f4f-n6jwb"] Feb 20 12:21:35.525757 master-0 kubenswrapper[31420]: I0220 12:21:35.520859 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" path="/var/lib/kubelet/pods/bedd35bf-2057-4ed2-b1bd-5b65e43788b0/volumes" Feb 20 12:21:36.572860 master-0 kubenswrapper[31420]: I0220 12:21:36.571764 31420 scope.go:117] "RemoveContainer" containerID="8cfb56d9455925a4695afd81e26d997b24beb7da151d9c019d68eece4909ad64" Feb 20 12:21:36.879234 master-0 kubenswrapper[31420]: I0220 12:21:36.879157 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:37.024171 master-0 kubenswrapper[31420]: I0220 12:21:37.024115 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-e60fa-default-internal-api-0" Feb 20 12:21:37.035499 master-0 kubenswrapper[31420]: I0220 12:21:37.035424 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nd8sh" event={"ID":"41df128e-94f0-4150-b0d6-2e81542c1ab7","Type":"ContainerStarted","Data":"58130a99e04289ab8fafdad5b078f40dd208f13581bfb5b6885307aaf49fd8fe"} Feb 20 12:21:37.097409 master-0 kubenswrapper[31420]: I0220 12:21:37.097295 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-conductor-db-sync-nd8sh" podStartSLOduration=1.864272613 podStartE2EDuration="13.097274364s" podCreationTimestamp="2026-02-20 12:21:24 +0000 UTC" firstStartedPulling="2026-02-20 12:21:25.393036354 +0000 UTC m=+990.112274595" lastFinishedPulling="2026-02-20 12:21:36.626038115 +0000 UTC m=+1001.345276346" observedRunningTime="2026-02-20 12:21:37.078956626 +0000 UTC m=+1001.798194867" watchObservedRunningTime="2026-02-20 12:21:37.097274364 +0000 UTC m=+1001.816512605" Feb 20 12:21:38.065834 master-0 kubenswrapper[31420]: I0220 12:21:38.065744 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerStarted","Data":"820a512e2b71aab05b9066efe531a8183824f64e2868f5d1b2b925302df68ef1"} Feb 20 12:21:38.068924 master-0 kubenswrapper[31420]: I0220 12:21:38.068863 31420 generic.go:334] "Generic (PLEG): container finished" podID="f61d99d3-557f-4054-9f41-b5fa83cb1ba9" containerID="8c8387c57bd0e785759e6ae02421f0356d153d239440f40696cb28ac249556cd" exitCode=0 Feb 20 12:21:38.069035 master-0 kubenswrapper[31420]: I0220 12:21:38.068944 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f61d99d3-557f-4054-9f41-b5fa83cb1ba9","Type":"ContainerDied","Data":"8c8387c57bd0e785759e6ae02421f0356d153d239440f40696cb28ac249556cd"} Feb 20 12:21:39.108389 master-0 kubenswrapper[31420]: I0220 12:21:39.107473 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f61d99d3-557f-4054-9f41-b5fa83cb1ba9","Type":"ContainerStarted","Data":"428a616dd699d8426e4079549f6f6b7d21f228259cd1d60aac367c4b5e3d513b"} Feb 20 12:21:39.108389 master-0 kubenswrapper[31420]: I0220 12:21:39.107558 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"f61d99d3-557f-4054-9f41-b5fa83cb1ba9","Type":"ContainerStarted","Data":"cdbc6e157d64144fb81e0900165e4f0ec7c2160ed31cd49b4b3d803d97e75555"} Feb 20 12:21:40.131775 master-0 kubenswrapper[31420]: I0220 12:21:40.131712 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f61d99d3-557f-4054-9f41-b5fa83cb1ba9","Type":"ContainerStarted","Data":"4ec578703b412cb1ce4423f0f2a640e59102008f43fe048748519aa9fe0adc7c"} Feb 20 12:21:41.152376 master-0 kubenswrapper[31420]: I0220 12:21:41.152133 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f61d99d3-557f-4054-9f41-b5fa83cb1ba9","Type":"ContainerStarted","Data":"7774606439ce81130c986f9b06b237070d2e2a03d79fbed172b97be0dd4a6295"} Feb 20 12:21:41.152376 master-0 kubenswrapper[31420]: I0220 12:21:41.152189 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f61d99d3-557f-4054-9f41-b5fa83cb1ba9","Type":"ContainerStarted","Data":"236175c2a1e3ed52adb95d408e168eec6ce5426c5e9d90741c7fff9a82c836b9"} Feb 20 12:21:42.167122 master-0 kubenswrapper[31420]: I0220 12:21:42.165003 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 20 12:21:42.167122 master-0 kubenswrapper[31420]: I0220 12:21:42.165079 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 20 12:21:42.209095 master-0 kubenswrapper[31420]: I0220 12:21:42.208997 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=8.460749453 podStartE2EDuration="18.208979726s" podCreationTimestamp="2026-02-20 12:21:24 +0000 UTC" firstStartedPulling="2026-02-20 12:21:26.870776153 +0000 UTC m=+991.590014404" lastFinishedPulling="2026-02-20 12:21:36.619006436 +0000 UTC m=+1001.338244677" observedRunningTime="2026-02-20 12:21:42.202326558 +0000 UTC 
m=+1006.921564799" watchObservedRunningTime="2026-02-20 12:21:42.208979726 +0000 UTC m=+1006.928217967" Feb 20 12:21:44.190611 master-0 kubenswrapper[31420]: I0220 12:21:44.190550 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 20 12:21:44.233916 master-0 kubenswrapper[31420]: I0220 12:21:44.233857 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 20 12:21:45.108318 master-0 kubenswrapper[31420]: I0220 12:21:45.108206 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 20 12:21:45.108318 master-0 kubenswrapper[31420]: I0220 12:21:45.108314 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 20 12:21:45.108656 master-0 kubenswrapper[31420]: I0220 12:21:45.108337 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 20 12:21:45.108656 master-0 kubenswrapper[31420]: I0220 12:21:45.108357 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 20 12:21:45.132098 master-0 kubenswrapper[31420]: I0220 12:21:45.132029 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 20 12:21:45.133099 master-0 kubenswrapper[31420]: I0220 12:21:45.133044 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 20 12:21:45.206945 master-0 kubenswrapper[31420]: I0220 12:21:45.206870 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 20 12:21:45.208507 master-0 kubenswrapper[31420]: I0220 12:21:45.208466 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 20 12:21:53.335683 master-0 
kubenswrapper[31420]: I0220 12:21:53.335631 31420 generic.go:334] "Generic (PLEG): container finished" podID="41df128e-94f0-4150-b0d6-2e81542c1ab7" containerID="58130a99e04289ab8fafdad5b078f40dd208f13581bfb5b6885307aaf49fd8fe" exitCode=0 Feb 20 12:21:53.336340 master-0 kubenswrapper[31420]: I0220 12:21:53.335791 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nd8sh" event={"ID":"41df128e-94f0-4150-b0d6-2e81542c1ab7","Type":"ContainerDied","Data":"58130a99e04289ab8fafdad5b078f40dd208f13581bfb5b6885307aaf49fd8fe"} Feb 20 12:21:54.836337 master-0 kubenswrapper[31420]: I0220 12:21:54.836258 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:54.899773 master-0 kubenswrapper[31420]: I0220 12:21:54.899690 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-scripts\") pod \"41df128e-94f0-4150-b0d6-2e81542c1ab7\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " Feb 20 12:21:54.900020 master-0 kubenswrapper[31420]: I0220 12:21:54.899903 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-config-data\") pod \"41df128e-94f0-4150-b0d6-2e81542c1ab7\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " Feb 20 12:21:54.900162 master-0 kubenswrapper[31420]: I0220 12:21:54.900105 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-combined-ca-bundle\") pod \"41df128e-94f0-4150-b0d6-2e81542c1ab7\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " Feb 20 12:21:54.900848 master-0 kubenswrapper[31420]: I0220 12:21:54.900806 31420 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qjbjp\" (UniqueName: \"kubernetes.io/projected/41df128e-94f0-4150-b0d6-2e81542c1ab7-kube-api-access-qjbjp\") pod \"41df128e-94f0-4150-b0d6-2e81542c1ab7\" (UID: \"41df128e-94f0-4150-b0d6-2e81542c1ab7\") " Feb 20 12:21:54.905142 master-0 kubenswrapper[31420]: I0220 12:21:54.905088 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41df128e-94f0-4150-b0d6-2e81542c1ab7-kube-api-access-qjbjp" (OuterVolumeSpecName: "kube-api-access-qjbjp") pod "41df128e-94f0-4150-b0d6-2e81542c1ab7" (UID: "41df128e-94f0-4150-b0d6-2e81542c1ab7"). InnerVolumeSpecName "kube-api-access-qjbjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:21:54.908199 master-0 kubenswrapper[31420]: I0220 12:21:54.908111 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-scripts" (OuterVolumeSpecName: "scripts") pod "41df128e-94f0-4150-b0d6-2e81542c1ab7" (UID: "41df128e-94f0-4150-b0d6-2e81542c1ab7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:54.942862 master-0 kubenswrapper[31420]: I0220 12:21:54.942789 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41df128e-94f0-4150-b0d6-2e81542c1ab7" (UID: "41df128e-94f0-4150-b0d6-2e81542c1ab7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:54.963264 master-0 kubenswrapper[31420]: I0220 12:21:54.963184 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-config-data" (OuterVolumeSpecName: "config-data") pod "41df128e-94f0-4150-b0d6-2e81542c1ab7" (UID: "41df128e-94f0-4150-b0d6-2e81542c1ab7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:21:55.004986 master-0 kubenswrapper[31420]: I0220 12:21:55.004925 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:55.004986 master-0 kubenswrapper[31420]: I0220 12:21:55.004965 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:55.004986 master-0 kubenswrapper[31420]: I0220 12:21:55.004980 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjbjp\" (UniqueName: \"kubernetes.io/projected/41df128e-94f0-4150-b0d6-2e81542c1ab7-kube-api-access-qjbjp\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:55.004986 master-0 kubenswrapper[31420]: I0220 12:21:55.004994 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41df128e-94f0-4150-b0d6-2e81542c1ab7-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:21:55.371720 master-0 kubenswrapper[31420]: I0220 12:21:55.371623 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nd8sh" event={"ID":"41df128e-94f0-4150-b0d6-2e81542c1ab7","Type":"ContainerDied","Data":"ead4eec4276552029220e85f5687fac39b5a524fe1d037b46465d7efdf15e41b"} Feb 20 12:21:55.371720 master-0 kubenswrapper[31420]: I0220 12:21:55.371719 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ead4eec4276552029220e85f5687fac39b5a524fe1d037b46465d7efdf15e41b" Feb 20 12:21:55.372052 master-0 kubenswrapper[31420]: I0220 12:21:55.371826 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nd8sh" Feb 20 12:21:55.539853 master-0 kubenswrapper[31420]: I0220 12:21:55.539741 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 20 12:21:55.540489 master-0 kubenswrapper[31420]: E0220 12:21:55.540457 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerName="placement-log" Feb 20 12:21:55.540548 master-0 kubenswrapper[31420]: I0220 12:21:55.540486 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerName="placement-log" Feb 20 12:21:55.540548 master-0 kubenswrapper[31420]: E0220 12:21:55.540507 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerName="placement-api" Feb 20 12:21:55.540548 master-0 kubenswrapper[31420]: I0220 12:21:55.540516 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerName="placement-api" Feb 20 12:21:55.540653 master-0 kubenswrapper[31420]: E0220 12:21:55.540546 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" containerName="dnsmasq-dns" Feb 20 12:21:55.540653 master-0 kubenswrapper[31420]: I0220 12:21:55.540556 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" containerName="dnsmasq-dns" Feb 20 12:21:55.540653 master-0 kubenswrapper[31420]: E0220 12:21:55.540584 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41df128e-94f0-4150-b0d6-2e81542c1ab7" containerName="nova-cell0-conductor-db-sync" Feb 20 12:21:55.540653 master-0 kubenswrapper[31420]: I0220 12:21:55.540593 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="41df128e-94f0-4150-b0d6-2e81542c1ab7" containerName="nova-cell0-conductor-db-sync" Feb 20 12:21:55.540653 master-0 
kubenswrapper[31420]: E0220 12:21:55.540642 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" containerName="init"
Feb 20 12:21:55.540653 master-0 kubenswrapper[31420]: I0220 12:21:55.540652 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" containerName="init"
Feb 20 12:21:55.540949 master-0 kubenswrapper[31420]: I0220 12:21:55.540924 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedd35bf-2057-4ed2-b1bd-5b65e43788b0" containerName="dnsmasq-dns"
Feb 20 12:21:55.540996 master-0 kubenswrapper[31420]: I0220 12:21:55.540972 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="41df128e-94f0-4150-b0d6-2e81542c1ab7" containerName="nova-cell0-conductor-db-sync"
Feb 20 12:21:55.541040 master-0 kubenswrapper[31420]: I0220 12:21:55.540994 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerName="placement-log"
Feb 20 12:21:55.541040 master-0 kubenswrapper[31420]: I0220 12:21:55.541027 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5f8852-b52b-402c-bc6b-f21c99991432" containerName="placement-api"
Feb 20 12:21:55.542018 master-0 kubenswrapper[31420]: I0220 12:21:55.541986 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.553228 master-0 kubenswrapper[31420]: I0220 12:21:55.547915 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 20 12:21:55.563385 master-0 kubenswrapper[31420]: I0220 12:21:55.563346 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 20 12:21:55.649517 master-0 kubenswrapper[31420]: I0220 12:21:55.639999 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c77eb66-18a8-40b3-8194-ce0160ccfe8c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6c77eb66-18a8-40b3-8194-ce0160ccfe8c\") " pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.649517 master-0 kubenswrapper[31420]: I0220 12:21:55.640126 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c77eb66-18a8-40b3-8194-ce0160ccfe8c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6c77eb66-18a8-40b3-8194-ce0160ccfe8c\") " pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.649517 master-0 kubenswrapper[31420]: I0220 12:21:55.640216 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wv4k\" (UniqueName: \"kubernetes.io/projected/6c77eb66-18a8-40b3-8194-ce0160ccfe8c-kube-api-access-4wv4k\") pod \"nova-cell0-conductor-0\" (UID: \"6c77eb66-18a8-40b3-8194-ce0160ccfe8c\") " pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.742409 master-0 kubenswrapper[31420]: I0220 12:21:55.742342 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wv4k\" (UniqueName: \"kubernetes.io/projected/6c77eb66-18a8-40b3-8194-ce0160ccfe8c-kube-api-access-4wv4k\") pod \"nova-cell0-conductor-0\" (UID: \"6c77eb66-18a8-40b3-8194-ce0160ccfe8c\") " pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.742620 master-0 kubenswrapper[31420]: I0220 12:21:55.742487 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c77eb66-18a8-40b3-8194-ce0160ccfe8c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6c77eb66-18a8-40b3-8194-ce0160ccfe8c\") " pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.742620 master-0 kubenswrapper[31420]: I0220 12:21:55.742589 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c77eb66-18a8-40b3-8194-ce0160ccfe8c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6c77eb66-18a8-40b3-8194-ce0160ccfe8c\") " pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.747280 master-0 kubenswrapper[31420]: I0220 12:21:55.746567 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c77eb66-18a8-40b3-8194-ce0160ccfe8c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6c77eb66-18a8-40b3-8194-ce0160ccfe8c\") " pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.748138 master-0 kubenswrapper[31420]: I0220 12:21:55.748109 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c77eb66-18a8-40b3-8194-ce0160ccfe8c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6c77eb66-18a8-40b3-8194-ce0160ccfe8c\") " pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.762440 master-0 kubenswrapper[31420]: I0220 12:21:55.762382 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wv4k\" (UniqueName: \"kubernetes.io/projected/6c77eb66-18a8-40b3-8194-ce0160ccfe8c-kube-api-access-4wv4k\") pod \"nova-cell0-conductor-0\" (UID: \"6c77eb66-18a8-40b3-8194-ce0160ccfe8c\") " pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:55.864742 master-0 kubenswrapper[31420]: I0220 12:21:55.864674 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 20 12:21:56.446494 master-0 kubenswrapper[31420]: I0220 12:21:56.446410 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 20 12:21:56.454143 master-0 kubenswrapper[31420]: W0220 12:21:56.454086 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c77eb66_18a8_40b3_8194_ce0160ccfe8c.slice/crio-e0b64ca5c341695cf18f03b062f5d00ad3d5ed2176016bc98b404bea3d6ad6c1 WatchSource:0}: Error finding container e0b64ca5c341695cf18f03b062f5d00ad3d5ed2176016bc98b404bea3d6ad6c1: Status 404 returned error can't find the container with id e0b64ca5c341695cf18f03b062f5d00ad3d5ed2176016bc98b404bea3d6ad6c1
Feb 20 12:21:57.415700 master-0 kubenswrapper[31420]: I0220 12:21:57.415637 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6c77eb66-18a8-40b3-8194-ce0160ccfe8c","Type":"ContainerStarted","Data":"6eecf6e3e3e30d47217f2d15460dca5bfaebc58b9d9b72ad23eb7896815e6059"}
Feb 20 12:21:57.415700 master-0 kubenswrapper[31420]: I0220 12:21:57.415696 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6c77eb66-18a8-40b3-8194-ce0160ccfe8c","Type":"ContainerStarted","Data":"e0b64ca5c341695cf18f03b062f5d00ad3d5ed2176016bc98b404bea3d6ad6c1"}
Feb 20 12:21:57.442808 master-0 kubenswrapper[31420]: I0220 12:21:57.442744 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.442725627 podStartE2EDuration="2.442725627s" podCreationTimestamp="2026-02-20 12:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:21:57.436277655 +0000 UTC m=+1022.155515896" watchObservedRunningTime="2026-02-20 12:21:57.442725627 +0000 UTC m=+1022.161963868"
Feb 20 12:21:58.431792 master-0 kubenswrapper[31420]: I0220 12:21:58.431696 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 20 12:22:05.923884 master-0 kubenswrapper[31420]: I0220 12:22:05.923812 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 20 12:22:06.482578 master-0 kubenswrapper[31420]: I0220 12:22:06.482190 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zdwhr"]
Feb 20 12:22:06.484292 master-0 kubenswrapper[31420]: I0220 12:22:06.484263 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.487351 master-0 kubenswrapper[31420]: I0220 12:22:06.487308 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 20 12:22:06.487620 master-0 kubenswrapper[31420]: I0220 12:22:06.487450 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 20 12:22:06.524079 master-0 kubenswrapper[31420]: I0220 12:22:06.523150 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zdwhr"]
Feb 20 12:22:06.541556 master-0 kubenswrapper[31420]: I0220 12:22:06.541468 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-config-data\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.541825 master-0 kubenswrapper[31420]: I0220 12:22:06.541636 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.541825 master-0 kubenswrapper[31420]: I0220 12:22:06.541695 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhlbg\" (UniqueName: \"kubernetes.io/projected/00959100-db68-42e4-9009-7424e5bdffe9-kube-api-access-rhlbg\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.541825 master-0 kubenswrapper[31420]: I0220 12:22:06.541733 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-scripts\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.615038 master-0 kubenswrapper[31420]: I0220 12:22:06.612990 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Feb 20 12:22:06.615038 master-0 kubenswrapper[31420]: I0220 12:22:06.614870 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.620942 master-0 kubenswrapper[31420]: I0220 12:22:06.618752 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data"
Feb 20 12:22:06.626157 master-0 kubenswrapper[31420]: I0220 12:22:06.622998 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Feb 20 12:22:06.646733 master-0 kubenswrapper[31420]: I0220 12:22:06.646676 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.647864 master-0 kubenswrapper[31420]: I0220 12:22:06.647815 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhlbg\" (UniqueName: \"kubernetes.io/projected/00959100-db68-42e4-9009-7424e5bdffe9-kube-api-access-rhlbg\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.647968 master-0 kubenswrapper[31420]: I0220 12:22:06.647924 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-scripts\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.648377 master-0 kubenswrapper[31420]: I0220 12:22:06.648351 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-config-data\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.651932 master-0 kubenswrapper[31420]: I0220 12:22:06.651875 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-scripts\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.652086 master-0 kubenswrapper[31420]: I0220 12:22:06.651944 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.659941 master-0 kubenswrapper[31420]: I0220 12:22:06.654290 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-config-data\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.704548 master-0 kubenswrapper[31420]: I0220 12:22:06.695469 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhlbg\" (UniqueName: \"kubernetes.io/projected/00959100-db68-42e4-9009-7424e5bdffe9-kube-api-access-rhlbg\") pod \"nova-cell0-cell-mapping-zdwhr\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") " pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.738739 master-0 kubenswrapper[31420]: I0220 12:22:06.737821 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 20 12:22:06.741161 master-0 kubenswrapper[31420]: I0220 12:22:06.740905 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 12:22:06.755546 master-0 kubenswrapper[31420]: I0220 12:22:06.745973 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 20 12:22:06.755546 master-0 kubenswrapper[31420]: I0220 12:22:06.753242 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba54209-fbee-41c6-b8fa-a82b2534d9d7-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"aba54209-fbee-41c6-b8fa-a82b2534d9d7\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.755546 master-0 kubenswrapper[31420]: I0220 12:22:06.753737 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml6br\" (UniqueName: \"kubernetes.io/projected/aba54209-fbee-41c6-b8fa-a82b2534d9d7-kube-api-access-ml6br\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"aba54209-fbee-41c6-b8fa-a82b2534d9d7\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.755546 master-0 kubenswrapper[31420]: I0220 12:22:06.753906 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba54209-fbee-41c6-b8fa-a82b2534d9d7-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"aba54209-fbee-41c6-b8fa-a82b2534d9d7\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.784634 master-0 kubenswrapper[31420]: I0220 12:22:06.768646 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 20 12:22:06.797578 master-0 kubenswrapper[31420]: I0220 12:22:06.797536 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 20 12:22:06.802491 master-0 kubenswrapper[31420]: I0220 12:22:06.800434 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 20 12:22:06.810833 master-0 kubenswrapper[31420]: I0220 12:22:06.805504 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 20 12:22:06.821241 master-0 kubenswrapper[31420]: I0220 12:22:06.820302 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 20 12:22:06.856695 master-0 kubenswrapper[31420]: I0220 12:22:06.856220 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11c3184-d8c7-45dc-8988-d4eb7f86289d-logs\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:06.856695 master-0 kubenswrapper[31420]: I0220 12:22:06.856279 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba54209-fbee-41c6-b8fa-a82b2534d9d7-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"aba54209-fbee-41c6-b8fa-a82b2534d9d7\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.856695 master-0 kubenswrapper[31420]: I0220 12:22:06.856301 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-config-data\") pod \"nova-scheduler-0\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " pod="openstack/nova-scheduler-0"
Feb 20 12:22:06.859633 master-0 kubenswrapper[31420]: I0220 12:22:06.857322 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-config-data\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:06.859633 master-0 kubenswrapper[31420]: I0220 12:22:06.857455 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml6br\" (UniqueName: \"kubernetes.io/projected/aba54209-fbee-41c6-b8fa-a82b2534d9d7-kube-api-access-ml6br\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"aba54209-fbee-41c6-b8fa-a82b2534d9d7\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.859633 master-0 kubenswrapper[31420]: I0220 12:22:06.857495 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " pod="openstack/nova-scheduler-0"
Feb 20 12:22:06.859633 master-0 kubenswrapper[31420]: I0220 12:22:06.857542 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:06.859633 master-0 kubenswrapper[31420]: I0220 12:22:06.857574 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba54209-fbee-41c6-b8fa-a82b2534d9d7-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"aba54209-fbee-41c6-b8fa-a82b2534d9d7\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.869422 master-0 kubenswrapper[31420]: I0220 12:22:06.869302 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba54209-fbee-41c6-b8fa-a82b2534d9d7-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"aba54209-fbee-41c6-b8fa-a82b2534d9d7\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.879458 master-0 kubenswrapper[31420]: I0220 12:22:06.879417 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vkq7\" (UniqueName: \"kubernetes.io/projected/8ee6d8dc-215d-4dc5-83df-120591fdddb7-kube-api-access-2vkq7\") pod \"nova-scheduler-0\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " pod="openstack/nova-scheduler-0"
Feb 20 12:22:06.883731 master-0 kubenswrapper[31420]: I0220 12:22:06.879430 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba54209-fbee-41c6-b8fa-a82b2534d9d7-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"aba54209-fbee-41c6-b8fa-a82b2534d9d7\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.884011 master-0 kubenswrapper[31420]: I0220 12:22:06.882684 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:06.891889 master-0 kubenswrapper[31420]: I0220 12:22:06.887644 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 20 12:22:06.891889 master-0 kubenswrapper[31420]: I0220 12:22:06.889457 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:06.892308 master-0 kubenswrapper[31420]: I0220 12:22:06.892273 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkvm\" (UniqueName: \"kubernetes.io/projected/d11c3184-d8c7-45dc-8988-d4eb7f86289d-kube-api-access-tpkvm\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:06.894329 master-0 kubenswrapper[31420]: I0220 12:22:06.894283 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 20 12:22:06.904997 master-0 kubenswrapper[31420]: I0220 12:22:06.903620 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 20 12:22:06.906244 master-0 kubenswrapper[31420]: I0220 12:22:06.906199 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml6br\" (UniqueName: \"kubernetes.io/projected/aba54209-fbee-41c6-b8fa-a82b2534d9d7-kube-api-access-ml6br\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"aba54209-fbee-41c6-b8fa-a82b2534d9d7\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:06.949130 master-0 kubenswrapper[31420]: I0220 12:22:06.949045 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 12:22:06.951106 master-0 kubenswrapper[31420]: I0220 12:22:06.951014 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 20 12:22:06.952768 master-0 kubenswrapper[31420]: I0220 12:22:06.952654 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 20 12:22:06.979619 master-0 kubenswrapper[31420]: I0220 12:22:06.978200 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.998762 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " pod="openstack/nova-scheduler-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.998823 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.998865 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vkq7\" (UniqueName: \"kubernetes.io/projected/8ee6d8dc-215d-4dc5-83df-120591fdddb7-kube-api-access-2vkq7\") pod \"nova-scheduler-0\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " pod="openstack/nova-scheduler-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.999603 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpkvm\" (UniqueName: \"kubernetes.io/projected/d11c3184-d8c7-45dc-8988-d4eb7f86289d-kube-api-access-tpkvm\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.999696 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.999761 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.999780 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kwp6\" (UniqueName: \"kubernetes.io/projected/e8e8aa2e-fd54-4e46-96ae-34f769964346-kube-api-access-9kwp6\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.999831 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11c3184-d8c7-45dc-8988-d4eb7f86289d-logs\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.999865 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-config-data\") pod \"nova-scheduler-0\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " pod="openstack/nova-scheduler-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:06.999904 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-config-data\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:07.004509 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-config-data\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:07.007621 master-0 kubenswrapper[31420]: I0220 12:22:07.004870 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11c3184-d8c7-45dc-8988-d4eb7f86289d-logs\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:07.013348 master-0 kubenswrapper[31420]: I0220 12:22:07.013095 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " pod="openstack/nova-scheduler-0"
Feb 20 12:22:07.027573 master-0 kubenswrapper[31420]: I0220 12:22:07.026510 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:07.029443 master-0 kubenswrapper[31420]: I0220 12:22:07.028764 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-config-data\") pod \"nova-scheduler-0\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " pod="openstack/nova-scheduler-0"
Feb 20 12:22:07.038159 master-0 kubenswrapper[31420]: I0220 12:22:07.038122 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vkq7\" (UniqueName: \"kubernetes.io/projected/8ee6d8dc-215d-4dc5-83df-120591fdddb7-kube-api-access-2vkq7\") pod \"nova-scheduler-0\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " pod="openstack/nova-scheduler-0"
Feb 20 12:22:07.061802 master-0 kubenswrapper[31420]: I0220 12:22:07.054365 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpkvm\" (UniqueName: \"kubernetes.io/projected/d11c3184-d8c7-45dc-8988-d4eb7f86289d-kube-api-access-tpkvm\") pod \"nova-api-0\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " pod="openstack/nova-api-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.101567 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.101642 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ecfe8c-6fff-4f97-a962-e36c8f813070-logs\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.101689 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.101715 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kwp6\" (UniqueName: \"kubernetes.io/projected/e8e8aa2e-fd54-4e46-96ae-34f769964346-kube-api-access-9kwp6\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.101789 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-config-data\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.101812 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphz2\" (UniqueName: \"kubernetes.io/projected/51ecfe8c-6fff-4f97-a962-e36c8f813070-kube-api-access-rphz2\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.101939 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.109877 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-694f5b8c75-kvdtj"]
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.111599 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.116874 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.117624 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:07.130050 master-0 kubenswrapper[31420]: I0220 12:22:07.120226 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-694f5b8c75-kvdtj"]
Feb 20 12:22:07.142345 master-0 kubenswrapper[31420]: I0220 12:22:07.142296 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 12:22:07.143339 master-0 kubenswrapper[31420]: I0220 12:22:07.143308 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:07.155509 master-0 kubenswrapper[31420]: I0220 12:22:07.154089 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kwp6\" (UniqueName: \"kubernetes.io/projected/e8e8aa2e-fd54-4e46-96ae-34f769964346-kube-api-access-9kwp6\") pod \"nova-cell1-novncproxy-0\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:07.155509 master-0 kubenswrapper[31420]: I0220 12:22:07.155320 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.203978 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-sb\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.204054 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-config-data\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.204109 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rphz2\" (UniqueName: \"kubernetes.io/projected/51ecfe8c-6fff-4f97-a962-e36c8f813070-kube-api-access-rphz2\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.204596 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.204677 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-swift-storage-0\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.204733 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-svc\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.204765 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-config\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.204895 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-nb\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.204972 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ecfe8c-6fff-4f97-a962-e36c8f813070-logs\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0"
Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.205076 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6stq\" (UniqueName: \"kubernetes.io/projected/c9303051-f88b-4545-b9ee-ddc0af81f1a7-kube-api-access-f6stq\") pod
\"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.205965 master-0 kubenswrapper[31420]: I0220 12:22:07.205595 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ecfe8c-6fff-4f97-a962-e36c8f813070-logs\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0" Feb 20 12:22:07.209023 master-0 kubenswrapper[31420]: I0220 12:22:07.208990 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-config-data\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0" Feb 20 12:22:07.211776 master-0 kubenswrapper[31420]: I0220 12:22:07.211722 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0" Feb 20 12:22:07.238382 master-0 kubenswrapper[31420]: I0220 12:22:07.237732 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rphz2\" (UniqueName: \"kubernetes.io/projected/51ecfe8c-6fff-4f97-a962-e36c8f813070-kube-api-access-rphz2\") pod \"nova-metadata-0\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " pod="openstack/nova-metadata-0" Feb 20 12:22:07.307319 master-0 kubenswrapper[31420]: I0220 12:22:07.306933 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6stq\" (UniqueName: \"kubernetes.io/projected/c9303051-f88b-4545-b9ee-ddc0af81f1a7-kube-api-access-f6stq\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " 
pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.307319 master-0 kubenswrapper[31420]: I0220 12:22:07.307022 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-sb\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.307319 master-0 kubenswrapper[31420]: I0220 12:22:07.307170 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-swift-storage-0\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.307319 master-0 kubenswrapper[31420]: I0220 12:22:07.307208 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-svc\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.307912 master-0 kubenswrapper[31420]: I0220 12:22:07.307889 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-sb\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.313552 master-0 kubenswrapper[31420]: I0220 12:22:07.310059 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-config\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " 
pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.313552 master-0 kubenswrapper[31420]: I0220 12:22:07.310251 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-nb\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.313552 master-0 kubenswrapper[31420]: I0220 12:22:07.311305 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-nb\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.313552 master-0 kubenswrapper[31420]: I0220 12:22:07.311837 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-svc\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.313552 master-0 kubenswrapper[31420]: I0220 12:22:07.312876 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-swift-storage-0\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.313552 master-0 kubenswrapper[31420]: I0220 12:22:07.313442 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-config\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " 
pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.369092 master-0 kubenswrapper[31420]: I0220 12:22:07.366423 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:07.448584 master-0 kubenswrapper[31420]: I0220 12:22:07.447996 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 12:22:07.483213 master-0 kubenswrapper[31420]: I0220 12:22:07.483165 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6stq\" (UniqueName: \"kubernetes.io/projected/c9303051-f88b-4545-b9ee-ddc0af81f1a7-kube-api-access-f6stq\") pod \"dnsmasq-dns-694f5b8c75-kvdtj\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") " pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:07.485411 master-0 kubenswrapper[31420]: I0220 12:22:07.484943 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:08.167046 master-0 kubenswrapper[31420]: I0220 12:22:08.166997 31420 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 12:22:08.167592 master-0 kubenswrapper[31420]: I0220 12:22:08.167429 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zdwhr"] Feb 20 12:22:08.193767 master-0 kubenswrapper[31420]: W0220 12:22:08.193715 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaba54209_fbee_41c6_b8fa_a82b2534d9d7.slice/crio-e20fba717638939c2dee166fe9704ba5b4f1a14552f4b1d6cbf4cf7e5432a8ff WatchSource:0}: Error finding container e20fba717638939c2dee166fe9704ba5b4f1a14552f4b1d6cbf4cf7e5432a8ff: Status 404 returned error can't find the container with id e20fba717638939c2dee166fe9704ba5b4f1a14552f4b1d6cbf4cf7e5432a8ff Feb 20 12:22:08.211974 master-0 kubenswrapper[31420]: 
I0220 12:22:08.211900 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:08.230604 master-0 kubenswrapper[31420]: W0220 12:22:08.230221 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8e8aa2e_fd54_4e46_96ae_34f769964346.slice/crio-462337f702831894ea2c5a6c7bd7cee2648516a8b416c395596914d689001e7b WatchSource:0}: Error finding container 462337f702831894ea2c5a6c7bd7cee2648516a8b416c395596914d689001e7b: Status 404 returned error can't find the container with id 462337f702831894ea2c5a6c7bd7cee2648516a8b416c395596914d689001e7b Feb 20 12:22:08.252442 master-0 kubenswrapper[31420]: I0220 12:22:08.252051 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 20 12:22:08.271315 master-0 kubenswrapper[31420]: I0220 12:22:08.270776 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 12:22:08.271315 master-0 kubenswrapper[31420]: W0220 12:22:08.270997 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51ecfe8c_6fff_4f97_a962_e36c8f813070.slice/crio-bec7769dff0ff11dfeb39bb6d98762be7014ac9feb72ac0dc64c1f6629ab1ad7 WatchSource:0}: Error finding container bec7769dff0ff11dfeb39bb6d98762be7014ac9feb72ac0dc64c1f6629ab1ad7: Status 404 returned error can't find the container with id bec7769dff0ff11dfeb39bb6d98762be7014ac9feb72ac0dc64c1f6629ab1ad7 Feb 20 12:22:08.274087 master-0 kubenswrapper[31420]: W0220 12:22:08.274032 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9303051_f88b_4545_b9ee_ddc0af81f1a7.slice/crio-c4cecfd88c4d60d77ed412a5e7e2a9696960040aa789e052e878fdd68ea81b3f WatchSource:0}: Error finding container 
c4cecfd88c4d60d77ed412a5e7e2a9696960040aa789e052e878fdd68ea81b3f: Status 404 returned error can't find the container with id c4cecfd88c4d60d77ed412a5e7e2a9696960040aa789e052e878fdd68ea81b3f Feb 20 12:22:08.303903 master-0 kubenswrapper[31420]: I0220 12:22:08.303786 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xfqx5"] Feb 20 12:22:08.306392 master-0 kubenswrapper[31420]: I0220 12:22:08.306313 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.308037 master-0 kubenswrapper[31420]: I0220 12:22:08.307987 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 20 12:22:08.310215 master-0 kubenswrapper[31420]: I0220 12:22:08.310171 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 20 12:22:08.335148 master-0 kubenswrapper[31420]: I0220 12:22:08.335082 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:22:08.354476 master-0 kubenswrapper[31420]: I0220 12:22:08.354392 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xfqx5"] Feb 20 12:22:08.367941 master-0 kubenswrapper[31420]: I0220 12:22:08.367877 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-694f5b8c75-kvdtj"] Feb 20 12:22:08.380483 master-0 kubenswrapper[31420]: I0220 12:22:08.380427 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 12:22:08.446390 master-0 kubenswrapper[31420]: I0220 12:22:08.443900 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-scripts\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: 
\"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.446390 master-0 kubenswrapper[31420]: I0220 12:22:08.444032 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-config-data\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.446390 master-0 kubenswrapper[31420]: I0220 12:22:08.444136 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.446390 master-0 kubenswrapper[31420]: I0220 12:22:08.444171 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbmvb\" (UniqueName: \"kubernetes.io/projected/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-kube-api-access-hbmvb\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.548557 master-0 kubenswrapper[31420]: I0220 12:22:08.548464 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-scripts\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.548652 master-0 kubenswrapper[31420]: I0220 12:22:08.548594 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-config-data\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.548696 master-0 kubenswrapper[31420]: I0220 12:22:08.548660 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.548744 master-0 kubenswrapper[31420]: I0220 12:22:08.548721 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbmvb\" (UniqueName: \"kubernetes.io/projected/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-kube-api-access-hbmvb\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.558232 master-0 kubenswrapper[31420]: I0220 12:22:08.558181 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-scripts\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.559261 master-0 kubenswrapper[31420]: I0220 12:22:08.559175 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-config-data\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.559326 master-0 kubenswrapper[31420]: I0220 12:22:08.559259 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.572557 master-0 kubenswrapper[31420]: I0220 12:22:08.572489 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbmvb\" (UniqueName: \"kubernetes.io/projected/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-kube-api-access-hbmvb\") pod \"nova-cell1-conductor-db-sync-xfqx5\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") " pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:08.590130 master-0 kubenswrapper[31420]: I0220 12:22:08.590011 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"aba54209-fbee-41c6-b8fa-a82b2534d9d7","Type":"ContainerStarted","Data":"e20fba717638939c2dee166fe9704ba5b4f1a14552f4b1d6cbf4cf7e5432a8ff"} Feb 20 12:22:08.592255 master-0 kubenswrapper[31420]: I0220 12:22:08.592226 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zdwhr" event={"ID":"00959100-db68-42e4-9009-7424e5bdffe9","Type":"ContainerStarted","Data":"072d654a1934e8076b1f63d71c80bc248f6d5ec0809f229f6ef67ff035b59ba6"} Feb 20 12:22:08.592255 master-0 kubenswrapper[31420]: I0220 12:22:08.592257 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zdwhr" event={"ID":"00959100-db68-42e4-9009-7424e5bdffe9","Type":"ContainerStarted","Data":"da678b2f93aa8e1c2d5d6561eeff44da9fd7b68f7e0205ed4a33a27bd8c7f401"} Feb 20 12:22:08.595637 master-0 kubenswrapper[31420]: I0220 12:22:08.595596 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"51ecfe8c-6fff-4f97-a962-e36c8f813070","Type":"ContainerStarted","Data":"bec7769dff0ff11dfeb39bb6d98762be7014ac9feb72ac0dc64c1f6629ab1ad7"} Feb 20 12:22:08.597762 master-0 kubenswrapper[31420]: I0220 12:22:08.597734 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e8e8aa2e-fd54-4e46-96ae-34f769964346","Type":"ContainerStarted","Data":"462337f702831894ea2c5a6c7bd7cee2648516a8b416c395596914d689001e7b"} Feb 20 12:22:08.599110 master-0 kubenswrapper[31420]: I0220 12:22:08.599084 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d11c3184-d8c7-45dc-8988-d4eb7f86289d","Type":"ContainerStarted","Data":"8417ac3156f92b3e56d51020562b6c3ae541e950e7eb89312073282747c679c0"} Feb 20 12:22:08.601424 master-0 kubenswrapper[31420]: I0220 12:22:08.601397 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8ee6d8dc-215d-4dc5-83df-120591fdddb7","Type":"ContainerStarted","Data":"193834c13d02ac8f6b17cebac6fe4e4f750177d37caa42ccffddf3f519073eae"} Feb 20 12:22:08.603230 master-0 kubenswrapper[31420]: I0220 12:22:08.603193 31420 generic.go:334] "Generic (PLEG): container finished" podID="c9303051-f88b-4545-b9ee-ddc0af81f1a7" containerID="eae0fcdf75faeb4886f06aeff56e7d31ce158eea1042d9fb67b1d4440245ed3a" exitCode=0 Feb 20 12:22:08.603230 master-0 kubenswrapper[31420]: I0220 12:22:08.603225 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" event={"ID":"c9303051-f88b-4545-b9ee-ddc0af81f1a7","Type":"ContainerDied","Data":"eae0fcdf75faeb4886f06aeff56e7d31ce158eea1042d9fb67b1d4440245ed3a"} Feb 20 12:22:08.603340 master-0 kubenswrapper[31420]: I0220 12:22:08.603244 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" 
event={"ID":"c9303051-f88b-4545-b9ee-ddc0af81f1a7","Type":"ContainerStarted","Data":"c4cecfd88c4d60d77ed412a5e7e2a9696960040aa789e052e878fdd68ea81b3f"} Feb 20 12:22:08.628974 master-0 kubenswrapper[31420]: I0220 12:22:08.628878 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zdwhr" podStartSLOduration=2.628857343 podStartE2EDuration="2.628857343s" podCreationTimestamp="2026-02-20 12:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:08.616394791 +0000 UTC m=+1033.335633062" watchObservedRunningTime="2026-02-20 12:22:08.628857343 +0000 UTC m=+1033.348095584" Feb 20 12:22:08.695050 master-0 kubenswrapper[31420]: I0220 12:22:08.694989 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xfqx5" Feb 20 12:22:09.253313 master-0 kubenswrapper[31420]: I0220 12:22:09.253215 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xfqx5"] Feb 20 12:22:09.287586 master-0 kubenswrapper[31420]: W0220 12:22:09.287481 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb05b1900_c2a2_4f2b_b61a_32fb0825fb42.slice/crio-681a80ec326eee5f691495bfed000946ecb27786c206626ee520c412ba36426a WatchSource:0}: Error finding container 681a80ec326eee5f691495bfed000946ecb27786c206626ee520c412ba36426a: Status 404 returned error can't find the container with id 681a80ec326eee5f691495bfed000946ecb27786c206626ee520c412ba36426a Feb 20 12:22:09.621618 master-0 kubenswrapper[31420]: I0220 12:22:09.621568 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xfqx5" 
event={"ID":"b05b1900-c2a2-4f2b-b61a-32fb0825fb42","Type":"ContainerStarted","Data":"952582ff816e7852a32903563543aa08297742db0d7e5cb97fa7a873499a3555"} Feb 20 12:22:09.621618 master-0 kubenswrapper[31420]: I0220 12:22:09.621619 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xfqx5" event={"ID":"b05b1900-c2a2-4f2b-b61a-32fb0825fb42","Type":"ContainerStarted","Data":"681a80ec326eee5f691495bfed000946ecb27786c206626ee520c412ba36426a"} Feb 20 12:22:09.643109 master-0 kubenswrapper[31420]: I0220 12:22:09.635716 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" event={"ID":"c9303051-f88b-4545-b9ee-ddc0af81f1a7","Type":"ContainerStarted","Data":"196847a0f30f1d3f92ca3b0096264e1e8d75ea6a188e90f925f06eba0b60df45"} Feb 20 12:22:09.643109 master-0 kubenswrapper[31420]: I0220 12:22:09.636292 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" Feb 20 12:22:09.658471 master-0 kubenswrapper[31420]: I0220 12:22:09.650482 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xfqx5" podStartSLOduration=1.650464009 podStartE2EDuration="1.650464009s" podCreationTimestamp="2026-02-20 12:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:09.63987075 +0000 UTC m=+1034.359109001" watchObservedRunningTime="2026-02-20 12:22:09.650464009 +0000 UTC m=+1034.369702240" Feb 20 12:22:09.668598 master-0 kubenswrapper[31420]: I0220 12:22:09.668462 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" podStartSLOduration=2.668441787 podStartE2EDuration="2.668441787s" podCreationTimestamp="2026-02-20 12:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:09.659171505 +0000 UTC m=+1034.378409746" watchObservedRunningTime="2026-02-20 12:22:09.668441787 +0000 UTC m=+1034.387680028" Feb 20 12:22:12.548565 master-0 kubenswrapper[31420]: I0220 12:22:12.543581 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 12:22:12.584048 master-0 kubenswrapper[31420]: I0220 12:22:12.583971 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 12:22:13.702941 master-0 kubenswrapper[31420]: I0220 12:22:13.702808 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e8e8aa2e-fd54-4e46-96ae-34f769964346","Type":"ContainerStarted","Data":"f94b6604b63179e65d16ef147da2554a208fc08ba19eb028a782387a8bfa8aa1"} Feb 20 12:22:13.703726 master-0 kubenswrapper[31420]: I0220 12:22:13.703694 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e8e8aa2e-fd54-4e46-96ae-34f769964346" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f94b6604b63179e65d16ef147da2554a208fc08ba19eb028a782387a8bfa8aa1" gracePeriod=30 Feb 20 12:22:13.708747 master-0 kubenswrapper[31420]: I0220 12:22:13.708701 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d11c3184-d8c7-45dc-8988-d4eb7f86289d","Type":"ContainerStarted","Data":"55a36ac03714124e9dcb35b2689bc5ee9f829fe0b2d3569ce4a619e012ca25c9"} Feb 20 12:22:13.709012 master-0 kubenswrapper[31420]: I0220 12:22:13.708993 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d11c3184-d8c7-45dc-8988-d4eb7f86289d","Type":"ContainerStarted","Data":"e804a68338c7242b5c81824b2ea26b981b8602ecb87f753ce493145579ac0cf1"} Feb 20 12:22:13.716381 master-0 kubenswrapper[31420]: I0220 12:22:13.715915 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"8ee6d8dc-215d-4dc5-83df-120591fdddb7","Type":"ContainerStarted","Data":"92cff045051382fa3c89b33f22f074a4ed0c1602991e3f8131a4eaf560421d6d"}
Feb 20 12:22:13.718420 master-0 kubenswrapper[31420]: I0220 12:22:13.718367 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ecfe8c-6fff-4f97-a962-e36c8f813070","Type":"ContainerStarted","Data":"8b114439f867f660a7e8a91df375bc0a6301205a9fbc89fe4871f9a998d50b08"}
Feb 20 12:22:13.718544 master-0 kubenswrapper[31420]: I0220 12:22:13.718428 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ecfe8c-6fff-4f97-a962-e36c8f813070","Type":"ContainerStarted","Data":"3ae035386e06f404c0ba4bf213cb34a3da15f9d06b31125d90235ab582e2991b"}
Feb 20 12:22:13.718609 master-0 kubenswrapper[31420]: I0220 12:22:13.718562 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51ecfe8c-6fff-4f97-a962-e36c8f813070" containerName="nova-metadata-log" containerID="cri-o://3ae035386e06f404c0ba4bf213cb34a3da15f9d06b31125d90235ab582e2991b" gracePeriod=30
Feb 20 12:22:13.718904 master-0 kubenswrapper[31420]: I0220 12:22:13.718873 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51ecfe8c-6fff-4f97-a962-e36c8f813070" containerName="nova-metadata-metadata" containerID="cri-o://8b114439f867f660a7e8a91df375bc0a6301205a9fbc89fe4871f9a998d50b08" gracePeriod=30
Feb 20 12:22:13.744050 master-0 kubenswrapper[31420]: I0220 12:22:13.743932 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.129228196 podStartE2EDuration="7.743909441s" podCreationTimestamp="2026-02-20 12:22:06 +0000 UTC" firstStartedPulling="2026-02-20 12:22:08.258508145 +0000 UTC m=+1032.977746386" lastFinishedPulling="2026-02-20 12:22:12.87318939 +0000 UTC m=+1037.592427631" observedRunningTime="2026-02-20 12:22:13.724854422 +0000 UTC m=+1038.444092693" watchObservedRunningTime="2026-02-20 12:22:13.743909441 +0000 UTC m=+1038.463147682"
Feb 20 12:22:13.760951 master-0 kubenswrapper[31420]: I0220 12:22:13.760844 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.174791454 podStartE2EDuration="7.760821529s" podCreationTimestamp="2026-02-20 12:22:06 +0000 UTC" firstStartedPulling="2026-02-20 12:22:08.290690025 +0000 UTC m=+1033.009928266" lastFinishedPulling="2026-02-20 12:22:12.8767201 +0000 UTC m=+1037.595958341" observedRunningTime="2026-02-20 12:22:13.756293061 +0000 UTC m=+1038.475531312" watchObservedRunningTime="2026-02-20 12:22:13.760821529 +0000 UTC m=+1038.480059780"
Feb 20 12:22:13.815563 master-0 kubenswrapper[31420]: I0220 12:22:13.815446 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.109264521 podStartE2EDuration="7.815422562s" podCreationTimestamp="2026-02-20 12:22:06 +0000 UTC" firstStartedPulling="2026-02-20 12:22:08.166919866 +0000 UTC m=+1032.886158107" lastFinishedPulling="2026-02-20 12:22:12.873077907 +0000 UTC m=+1037.592316148" observedRunningTime="2026-02-20 12:22:13.776964155 +0000 UTC m=+1038.496202416" watchObservedRunningTime="2026-02-20 12:22:13.815422562 +0000 UTC m=+1038.534660803"
Feb 20 12:22:13.831038 master-0 kubenswrapper[31420]: I0220 12:22:13.830945 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.1882100429999998 podStartE2EDuration="7.83092706s" podCreationTimestamp="2026-02-20 12:22:06 +0000 UTC" firstStartedPulling="2026-02-20 12:22:08.23038518 +0000 UTC m=+1032.949623421" lastFinishedPulling="2026-02-20 12:22:12.873102197 +0000 UTC m=+1037.592340438" observedRunningTime="2026-02-20 12:22:13.795721125 +0000 UTC m=+1038.514959376" watchObservedRunningTime="2026-02-20 12:22:13.83092706 +0000 UTC m=+1038.550165291"
Feb 20 12:22:14.735845 master-0 kubenswrapper[31420]: I0220 12:22:14.735773 31420 generic.go:334] "Generic (PLEG): container finished" podID="51ecfe8c-6fff-4f97-a962-e36c8f813070" containerID="3ae035386e06f404c0ba4bf213cb34a3da15f9d06b31125d90235ab582e2991b" exitCode=143
Feb 20 12:22:14.736428 master-0 kubenswrapper[31420]: I0220 12:22:14.735868 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ecfe8c-6fff-4f97-a962-e36c8f813070","Type":"ContainerDied","Data":"3ae035386e06f404c0ba4bf213cb34a3da15f9d06b31125d90235ab582e2991b"}
Feb 20 12:22:15.750947 master-0 kubenswrapper[31420]: I0220 12:22:15.750838 31420 generic.go:334] "Generic (PLEG): container finished" podID="00959100-db68-42e4-9009-7424e5bdffe9" containerID="072d654a1934e8076b1f63d71c80bc248f6d5ec0809f229f6ef67ff035b59ba6" exitCode=0
Feb 20 12:22:15.750947 master-0 kubenswrapper[31420]: I0220 12:22:15.750891 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zdwhr" event={"ID":"00959100-db68-42e4-9009-7424e5bdffe9","Type":"ContainerDied","Data":"072d654a1934e8076b1f63d71c80bc248f6d5ec0809f229f6ef67ff035b59ba6"}
Feb 20 12:22:17.141432 master-0 kubenswrapper[31420]: I0220 12:22:17.141267 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 20 12:22:17.142757 master-0 kubenswrapper[31420]: I0220 12:22:17.142686 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 20 12:22:17.155923 master-0 kubenswrapper[31420]: I0220 12:22:17.155829 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 20 12:22:17.156087 master-0 kubenswrapper[31420]: I0220 12:22:17.155936 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 20 12:22:17.194395 master-0 kubenswrapper[31420]: I0220 12:22:17.194341 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 20 12:22:17.370967 master-0 kubenswrapper[31420]: I0220 12:22:17.370845 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:17.449421 master-0 kubenswrapper[31420]: I0220 12:22:17.449269 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 20 12:22:17.449421 master-0 kubenswrapper[31420]: I0220 12:22:17.449356 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 20 12:22:17.488248 master-0 kubenswrapper[31420]: I0220 12:22:17.487682 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj"
Feb 20 12:22:17.584567 master-0 kubenswrapper[31420]: I0220 12:22:17.580814 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f9f66bcc-zjvh2"]
Feb 20 12:22:17.584567 master-0 kubenswrapper[31420]: I0220 12:22:17.581101 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" podUID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" containerName="dnsmasq-dns" containerID="cri-o://7369da29a0812fd00f07b854d5d50cc5dbfbeea5c83695a2f73d14d2dd36617e" gracePeriod=10
Feb 20 12:22:17.803535 master-0 kubenswrapper[31420]: I0220 12:22:17.803437 31420 generic.go:334] "Generic (PLEG): container finished" podID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" containerID="7369da29a0812fd00f07b854d5d50cc5dbfbeea5c83695a2f73d14d2dd36617e" exitCode=0
Feb 20 12:22:17.803742 master-0 kubenswrapper[31420]: I0220 12:22:17.803510 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" event={"ID":"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688","Type":"ContainerDied","Data":"7369da29a0812fd00f07b854d5d50cc5dbfbeea5c83695a2f73d14d2dd36617e"}
Feb 20 12:22:17.841478 master-0 kubenswrapper[31420]: I0220 12:22:17.841338 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 20 12:22:18.223833 master-0 kubenswrapper[31420]: I0220 12:22:18.223698 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:22:18.223833 master-0 kubenswrapper[31420]: I0220 12:22:18.223732 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:22:20.699099 master-0 kubenswrapper[31420]: I0220 12:22:20.699031 31420 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" podUID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.250:5353: connect: connection refused"
Feb 20 12:22:20.849673 master-0 kubenswrapper[31420]: I0220 12:22:20.848069 31420 generic.go:334] "Generic (PLEG): container finished" podID="1c15d66e-eaa8-4305-a5cb-1fa14e718d2c" containerID="820a512e2b71aab05b9066efe531a8183824f64e2868f5d1b2b925302df68ef1" exitCode=0
Feb 20 12:22:20.849673 master-0 kubenswrapper[31420]: I0220 12:22:20.848161 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerDied","Data":"820a512e2b71aab05b9066efe531a8183824f64e2868f5d1b2b925302df68ef1"}
Feb 20 12:22:20.853175 master-0 kubenswrapper[31420]: I0220 12:22:20.853121 31420 generic.go:334] "Generic (PLEG): container finished" podID="b05b1900-c2a2-4f2b-b61a-32fb0825fb42" containerID="952582ff816e7852a32903563543aa08297742db0d7e5cb97fa7a873499a3555" exitCode=0
Feb 20 12:22:20.853387 master-0 kubenswrapper[31420]: I0220 12:22:20.853166 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xfqx5" event={"ID":"b05b1900-c2a2-4f2b-b61a-32fb0825fb42","Type":"ContainerDied","Data":"952582ff816e7852a32903563543aa08297742db0d7e5cb97fa7a873499a3555"}
Feb 20 12:22:21.382159 master-0 kubenswrapper[31420]: I0220 12:22:21.381947 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:21.482385 master-0 kubenswrapper[31420]: I0220 12:22:21.482331 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2"
Feb 20 12:22:21.560338 master-0 kubenswrapper[31420]: I0220 12:22:21.560295 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-scripts\") pod \"00959100-db68-42e4-9009-7424e5bdffe9\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") "
Feb 20 12:22:21.560544 master-0 kubenswrapper[31420]: I0220 12:22:21.560496 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-combined-ca-bundle\") pod \"00959100-db68-42e4-9009-7424e5bdffe9\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") "
Feb 20 12:22:21.560617 master-0 kubenswrapper[31420]: I0220 12:22:21.560589 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhlbg\" (UniqueName: \"kubernetes.io/projected/00959100-db68-42e4-9009-7424e5bdffe9-kube-api-access-rhlbg\") pod \"00959100-db68-42e4-9009-7424e5bdffe9\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") "
Feb 20 12:22:21.561405 master-0 kubenswrapper[31420]: I0220 12:22:21.561381 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-config-data\") pod \"00959100-db68-42e4-9009-7424e5bdffe9\" (UID: \"00959100-db68-42e4-9009-7424e5bdffe9\") "
Feb 20 12:22:21.567301 master-0 kubenswrapper[31420]: I0220 12:22:21.567204 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-scripts" (OuterVolumeSpecName: "scripts") pod "00959100-db68-42e4-9009-7424e5bdffe9" (UID: "00959100-db68-42e4-9009-7424e5bdffe9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:22:21.569234 master-0 kubenswrapper[31420]: I0220 12:22:21.569085 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00959100-db68-42e4-9009-7424e5bdffe9-kube-api-access-rhlbg" (OuterVolumeSpecName: "kube-api-access-rhlbg") pod "00959100-db68-42e4-9009-7424e5bdffe9" (UID: "00959100-db68-42e4-9009-7424e5bdffe9"). InnerVolumeSpecName "kube-api-access-rhlbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:22:21.601908 master-0 kubenswrapper[31420]: I0220 12:22:21.601861 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00959100-db68-42e4-9009-7424e5bdffe9" (UID: "00959100-db68-42e4-9009-7424e5bdffe9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:22:21.611081 master-0 kubenswrapper[31420]: I0220 12:22:21.611032 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-config-data" (OuterVolumeSpecName: "config-data") pod "00959100-db68-42e4-9009-7424e5bdffe9" (UID: "00959100-db68-42e4-9009-7424e5bdffe9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:22:21.663431 master-0 kubenswrapper[31420]: I0220 12:22:21.663290 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-swift-storage-0\") pod \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") "
Feb 20 12:22:21.664537 master-0 kubenswrapper[31420]: I0220 12:22:21.663709 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-config\") pod \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") "
Feb 20 12:22:21.664537 master-0 kubenswrapper[31420]: I0220 12:22:21.663744 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-nb\") pod \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") "
Feb 20 12:22:21.664537 master-0 kubenswrapper[31420]: I0220 12:22:21.663860 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgs5s\" (UniqueName: \"kubernetes.io/projected/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-kube-api-access-dgs5s\") pod \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") "
Feb 20 12:22:21.664537 master-0 kubenswrapper[31420]: I0220 12:22:21.663962 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-svc\") pod \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") "
Feb 20 12:22:21.664537 master-0 kubenswrapper[31420]: I0220 12:22:21.663983 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-sb\") pod \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\" (UID: \"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688\") "
Feb 20 12:22:21.665451 master-0 kubenswrapper[31420]: I0220 12:22:21.665131 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.665451 master-0 kubenswrapper[31420]: I0220 12:22:21.665159 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhlbg\" (UniqueName: \"kubernetes.io/projected/00959100-db68-42e4-9009-7424e5bdffe9-kube-api-access-rhlbg\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.665451 master-0 kubenswrapper[31420]: I0220 12:22:21.665172 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.665451 master-0 kubenswrapper[31420]: I0220 12:22:21.665186 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00959100-db68-42e4-9009-7424e5bdffe9-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.666879 master-0 kubenswrapper[31420]: I0220 12:22:21.666478 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-kube-api-access-dgs5s" (OuterVolumeSpecName: "kube-api-access-dgs5s") pod "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" (UID: "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688"). InnerVolumeSpecName "kube-api-access-dgs5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:22:21.725551 master-0 kubenswrapper[31420]: I0220 12:22:21.725431 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-config" (OuterVolumeSpecName: "config") pod "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" (UID: "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:22:21.729502 master-0 kubenswrapper[31420]: I0220 12:22:21.729435 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" (UID: "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:22:21.731100 master-0 kubenswrapper[31420]: I0220 12:22:21.731045 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" (UID: "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:22:21.734262 master-0 kubenswrapper[31420]: I0220 12:22:21.734198 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" (UID: "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:22:21.736402 master-0 kubenswrapper[31420]: I0220 12:22:21.736350 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" (UID: "c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:22:21.766927 master-0 kubenswrapper[31420]: I0220 12:22:21.766878 31420 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.766927 master-0 kubenswrapper[31420]: I0220 12:22:21.766913 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.766927 master-0 kubenswrapper[31420]: I0220 12:22:21.766923 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.766927 master-0 kubenswrapper[31420]: I0220 12:22:21.766932 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgs5s\" (UniqueName: \"kubernetes.io/projected/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-kube-api-access-dgs5s\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.766927 master-0 kubenswrapper[31420]: I0220 12:22:21.766940 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.766927 master-0 kubenswrapper[31420]: I0220 12:22:21.766949 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:21.867076 master-0 kubenswrapper[31420]: I0220 12:22:21.867012 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"aba54209-fbee-41c6-b8fa-a82b2534d9d7","Type":"ContainerStarted","Data":"96c5c5194ae2f43ee9480a22c022d640e3e12d6df636f50a2beb414a9bf77268"}
Feb 20 12:22:21.868559 master-0 kubenswrapper[31420]: I0220 12:22:21.868513 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:21.872321 master-0 kubenswrapper[31420]: I0220 12:22:21.872279 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zdwhr" event={"ID":"00959100-db68-42e4-9009-7424e5bdffe9","Type":"ContainerDied","Data":"da678b2f93aa8e1c2d5d6561eeff44da9fd7b68f7e0205ed4a33a27bd8c7f401"}
Feb 20 12:22:21.872321 master-0 kubenswrapper[31420]: I0220 12:22:21.872327 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da678b2f93aa8e1c2d5d6561eeff44da9fd7b68f7e0205ed4a33a27bd8c7f401"
Feb 20 12:22:21.872666 master-0 kubenswrapper[31420]: I0220 12:22:21.872416 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zdwhr"
Feb 20 12:22:21.875733 master-0 kubenswrapper[31420]: I0220 12:22:21.875671 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerStarted","Data":"b502f8def7d068400d59f868474fd1a7de10ad1619cbaebe8d4db503e0482490"}
Feb 20 12:22:21.881300 master-0 kubenswrapper[31420]: I0220 12:22:21.881213 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2"
Feb 20 12:22:21.883972 master-0 kubenswrapper[31420]: I0220 12:22:21.883587 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9f66bcc-zjvh2" event={"ID":"c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688","Type":"ContainerDied","Data":"1b557924cbb21ae4f28bf02626ce149857b1850b096dd2807b2dc0579e75a2d7"}
Feb 20 12:22:21.883972 master-0 kubenswrapper[31420]: I0220 12:22:21.883637 31420 scope.go:117] "RemoveContainer" containerID="7369da29a0812fd00f07b854d5d50cc5dbfbeea5c83695a2f73d14d2dd36617e"
Feb 20 12:22:21.899861 master-0 kubenswrapper[31420]: I0220 12:22:21.899567 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.872923601 podStartE2EDuration="15.899549259s" podCreationTimestamp="2026-02-20 12:22:06 +0000 UTC" firstStartedPulling="2026-02-20 12:22:08.201965797 +0000 UTC m=+1032.921204038" lastFinishedPulling="2026-02-20 12:22:21.228591455 +0000 UTC m=+1045.947829696" observedRunningTime="2026-02-20 12:22:21.897710777 +0000 UTC m=+1046.616949028" watchObservedRunningTime="2026-02-20 12:22:21.899549259 +0000 UTC m=+1046.618787500"
Feb 20 12:22:21.909263 master-0 kubenswrapper[31420]: I0220 12:22:21.909179 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 20 12:22:21.918673 master-0 kubenswrapper[31420]: I0220 12:22:21.918621 31420 scope.go:117] "RemoveContainer" containerID="17edb73705584b4279cf7fbffa34a282fdc33f1e12b4fb4f27afb80cef231a21"
Feb 20 12:22:21.994673 master-0 kubenswrapper[31420]: I0220 12:22:21.994610 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f9f66bcc-zjvh2"]
Feb 20 12:22:22.031506 master-0 kubenswrapper[31420]: I0220 12:22:22.031396 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76f9f66bcc-zjvh2"]
Feb 20 12:22:22.454090 master-0 kubenswrapper[31420]: I0220 12:22:22.454029 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xfqx5"
Feb 20 12:22:22.484883 master-0 kubenswrapper[31420]: I0220 12:22:22.484770 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbmvb\" (UniqueName: \"kubernetes.io/projected/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-kube-api-access-hbmvb\") pod \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") "
Feb 20 12:22:22.484883 master-0 kubenswrapper[31420]: I0220 12:22:22.484820 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-combined-ca-bundle\") pod \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") "
Feb 20 12:22:22.485142 master-0 kubenswrapper[31420]: I0220 12:22:22.484952 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-config-data\") pod \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") "
Feb 20 12:22:22.485142 master-0 kubenswrapper[31420]: I0220 12:22:22.484984 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-scripts\") pod \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\" (UID: \"b05b1900-c2a2-4f2b-b61a-32fb0825fb42\") "
Feb 20 12:22:22.491494 master-0 kubenswrapper[31420]: I0220 12:22:22.491439 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-scripts" (OuterVolumeSpecName: "scripts") pod "b05b1900-c2a2-4f2b-b61a-32fb0825fb42" (UID: "b05b1900-c2a2-4f2b-b61a-32fb0825fb42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:22:22.503619 master-0 kubenswrapper[31420]: I0220 12:22:22.500860 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-kube-api-access-hbmvb" (OuterVolumeSpecName: "kube-api-access-hbmvb") pod "b05b1900-c2a2-4f2b-b61a-32fb0825fb42" (UID: "b05b1900-c2a2-4f2b-b61a-32fb0825fb42"). InnerVolumeSpecName "kube-api-access-hbmvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:22:22.525908 master-0 kubenswrapper[31420]: I0220 12:22:22.525824 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b05b1900-c2a2-4f2b-b61a-32fb0825fb42" (UID: "b05b1900-c2a2-4f2b-b61a-32fb0825fb42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:22:22.533852 master-0 kubenswrapper[31420]: I0220 12:22:22.533796 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-config-data" (OuterVolumeSpecName: "config-data") pod "b05b1900-c2a2-4f2b-b61a-32fb0825fb42" (UID: "b05b1900-c2a2-4f2b-b61a-32fb0825fb42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:22:22.589642 master-0 kubenswrapper[31420]: I0220 12:22:22.589575 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:22.589642 master-0 kubenswrapper[31420]: I0220 12:22:22.589627 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:22.589642 master-0 kubenswrapper[31420]: I0220 12:22:22.589637 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:22.589642 master-0 kubenswrapper[31420]: I0220 12:22:22.589647 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbmvb\" (UniqueName: \"kubernetes.io/projected/b05b1900-c2a2-4f2b-b61a-32fb0825fb42-kube-api-access-hbmvb\") on node \"master-0\" DevicePath \"\""
Feb 20 12:22:22.614713 master-0 kubenswrapper[31420]: I0220 12:22:22.614553 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 20 12:22:22.615007 master-0 kubenswrapper[31420]: I0220 12:22:22.614961 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-log" containerID="cri-o://e804a68338c7242b5c81824b2ea26b981b8602ecb87f753ce493145579ac0cf1" gracePeriod=30
Feb 20 12:22:22.615137 master-0 kubenswrapper[31420]: I0220 12:22:22.615066 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-api" containerID="cri-o://55a36ac03714124e9dcb35b2689bc5ee9f829fe0b2d3569ce4a619e012ca25c9" gracePeriod=30
Feb 20 12:22:22.674798 master-0 kubenswrapper[31420]: I0220 12:22:22.674547 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 20 12:22:22.674798 master-0 kubenswrapper[31420]: I0220 12:22:22.674800 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8ee6d8dc-215d-4dc5-83df-120591fdddb7" containerName="nova-scheduler-scheduler" containerID="cri-o://92cff045051382fa3c89b33f22f074a4ed0c1602991e3f8131a4eaf560421d6d" gracePeriod=30
Feb 20 12:22:22.897633 master-0 kubenswrapper[31420]: I0220 12:22:22.896185 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xfqx5" event={"ID":"b05b1900-c2a2-4f2b-b61a-32fb0825fb42","Type":"ContainerDied","Data":"681a80ec326eee5f691495bfed000946ecb27786c206626ee520c412ba36426a"}
Feb 20 12:22:22.897633 master-0 kubenswrapper[31420]: I0220 12:22:22.896244 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="681a80ec326eee5f691495bfed000946ecb27786c206626ee520c412ba36426a"
Feb 20 12:22:22.897633 master-0 kubenswrapper[31420]: I0220 12:22:22.896296 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xfqx5"
Feb 20 12:22:22.906374 master-0 kubenswrapper[31420]: I0220 12:22:22.906331 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerStarted","Data":"0ed16c7c44d8fd69bacda38f35c5ef07c604c1f78e784bf68662a785f4c6f577"}
Feb 20 12:22:22.906560 master-0 kubenswrapper[31420]: I0220 12:22:22.906378 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1c15d66e-eaa8-4305-a5cb-1fa14e718d2c","Type":"ContainerStarted","Data":"7097f5318bb898ff7849c2560379b734d3f7672d43ffb2213f69a6e03d1e6816"}
Feb 20 12:22:22.907907 master-0 kubenswrapper[31420]: I0220 12:22:22.907856 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Feb 20 12:22:22.907907 master-0 kubenswrapper[31420]: I0220 12:22:22.907903 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Feb 20 12:22:22.914555 master-0 kubenswrapper[31420]: I0220 12:22:22.914500 31420 generic.go:334] "Generic (PLEG): container finished" podID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerID="e804a68338c7242b5c81824b2ea26b981b8602ecb87f753ce493145579ac0cf1" exitCode=143
Feb 20 12:22:22.914796 master-0 kubenswrapper[31420]: I0220 12:22:22.914556 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d11c3184-d8c7-45dc-8988-d4eb7f86289d","Type":"ContainerDied","Data":"e804a68338c7242b5c81824b2ea26b981b8602ecb87f753ce493145579ac0cf1"}
Feb 20 12:22:22.957148 master-0 kubenswrapper[31420]: I0220 12:22:22.954509 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=55.930258124 podStartE2EDuration="1m37.954487887s" podCreationTimestamp="2026-02-20 12:20:45 +0000 UTC" firstStartedPulling="2026-02-20 12:20:54.594779923 +0000 UTC m=+959.314018154" lastFinishedPulling="2026-02-20 12:21:36.619009666 +0000 UTC m=+1001.338247917" observedRunningTime="2026-02-20 12:22:22.946019518 +0000 UTC m=+1047.665257779" watchObservedRunningTime="2026-02-20 12:22:22.954487887 +0000 UTC m=+1047.673726128"
Feb 20 12:22:23.075801 master-0 kubenswrapper[31420]: I0220 12:22:23.075612 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 20 12:22:23.076251 master-0 kubenswrapper[31420]: E0220 12:22:23.076225 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" containerName="init"
Feb 20 12:22:23.076251 master-0 kubenswrapper[31420]: I0220 12:22:23.076249 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" containerName="init"
Feb 20 12:22:23.076345 master-0 kubenswrapper[31420]: E0220 12:22:23.076265 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05b1900-c2a2-4f2b-b61a-32fb0825fb42" containerName="nova-cell1-conductor-db-sync"
Feb 20 12:22:23.076345 master-0 kubenswrapper[31420]: I0220 12:22:23.076271 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05b1900-c2a2-4f2b-b61a-32fb0825fb42" containerName="nova-cell1-conductor-db-sync"
Feb 20 12:22:23.076345 master-0 kubenswrapper[31420]: E0220 12:22:23.076295 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00959100-db68-42e4-9009-7424e5bdffe9" containerName="nova-manage"
Feb 20 12:22:23.076345 master-0 kubenswrapper[31420]: I0220 12:22:23.076304 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="00959100-db68-42e4-9009-7424e5bdffe9" containerName="nova-manage"
Feb 20 12:22:23.076345 master-0 kubenswrapper[31420]: E0220 12:22:23.076327 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" containerName="dnsmasq-dns"
Feb 20 12:22:23.076345 master-0 kubenswrapper[31420]: I0220 12:22:23.076334 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" containerName="dnsmasq-dns"
Feb 20 12:22:23.076682 master-0 kubenswrapper[31420]: I0220 12:22:23.076661 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05b1900-c2a2-4f2b-b61a-32fb0825fb42" containerName="nova-cell1-conductor-db-sync"
Feb 20 12:22:23.076734 master-0 kubenswrapper[31420]: I0220 12:22:23.076697 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" containerName="dnsmasq-dns"
Feb 20 12:22:23.076772 master-0 kubenswrapper[31420]: I0220 12:22:23.076748 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="00959100-db68-42e4-9009-7424e5bdffe9" containerName="nova-manage"
Feb 20 12:22:23.077763 master-0 kubenswrapper[31420]: I0220 12:22:23.077599 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 20 12:22:23.084809 master-0 kubenswrapper[31420]: I0220 12:22:23.084747 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 20 12:22:23.093999 master-0 kubenswrapper[31420]: I0220 12:22:23.093320 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 20 12:22:23.203996 master-0 kubenswrapper[31420]: I0220 12:22:23.203928 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929378b8-f28c-4558-8b42-8b8a297e63d9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"929378b8-f28c-4558-8b42-8b8a297e63d9\") " pod="openstack/nova-cell1-conductor-0"
Feb 20 12:22:23.204203 master-0 kubenswrapper[31420]: I0220 12:22:23.204122 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929378b8-f28c-4558-8b42-8b8a297e63d9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"929378b8-f28c-4558-8b42-8b8a297e63d9\") " pod="openstack/nova-cell1-conductor-0"
Feb 20 12:22:23.204203 master-0 kubenswrapper[31420]: I0220 12:22:23.204177 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cvwp\" (UniqueName: \"kubernetes.io/projected/929378b8-f28c-4558-8b42-8b8a297e63d9-kube-api-access-5cvwp\") pod \"nova-cell1-conductor-0\" (UID: \"929378b8-f28c-4558-8b42-8b8a297e63d9\") " pod="openstack/nova-cell1-conductor-0"
Feb 20 12:22:23.306394 master-0 kubenswrapper[31420]: I0220 12:22:23.306335 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929378b8-f28c-4558-8b42-8b8a297e63d9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"929378b8-f28c-4558-8b42-8b8a297e63d9\") " pod="openstack/nova-cell1-conductor-0"
Feb 20 12:22:23.306490 master-0 kubenswrapper[31420]: I0220 12:22:23.306426 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cvwp\" (UniqueName: \"kubernetes.io/projected/929378b8-f28c-4558-8b42-8b8a297e63d9-kube-api-access-5cvwp\") pod \"nova-cell1-conductor-0\" (UID: \"929378b8-f28c-4558-8b42-8b8a297e63d9\") " pod="openstack/nova-cell1-conductor-0"
Feb 20 12:22:23.306613 master-0 kubenswrapper[31420]: I0220 12:22:23.306553 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929378b8-f28c-4558-8b42-8b8a297e63d9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"929378b8-f28c-4558-8b42-8b8a297e63d9\") " pod="openstack/nova-cell1-conductor-0"
Feb 20 12:22:23.310606 master-0 kubenswrapper[31420]: I0220 12:22:23.309975 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\"
(UniqueName: \"kubernetes.io/secret/929378b8-f28c-4558-8b42-8b8a297e63d9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"929378b8-f28c-4558-8b42-8b8a297e63d9\") " pod="openstack/nova-cell1-conductor-0" Feb 20 12:22:23.310606 master-0 kubenswrapper[31420]: I0220 12:22:23.310238 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929378b8-f28c-4558-8b42-8b8a297e63d9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"929378b8-f28c-4558-8b42-8b8a297e63d9\") " pod="openstack/nova-cell1-conductor-0" Feb 20 12:22:23.325652 master-0 kubenswrapper[31420]: I0220 12:22:23.325586 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cvwp\" (UniqueName: \"kubernetes.io/projected/929378b8-f28c-4558-8b42-8b8a297e63d9-kube-api-access-5cvwp\") pod \"nova-cell1-conductor-0\" (UID: \"929378b8-f28c-4558-8b42-8b8a297e63d9\") " pod="openstack/nova-cell1-conductor-0" Feb 20 12:22:23.476944 master-0 kubenswrapper[31420]: I0220 12:22:23.476872 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 20 12:22:23.525871 master-0 kubenswrapper[31420]: I0220 12:22:23.525803 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688" path="/var/lib/kubelet/pods/c1ed8d7a-ef1c-4d32-9bdd-a1e29632c688/volumes" Feb 20 12:22:23.895191 master-0 kubenswrapper[31420]: I0220 12:22:23.895084 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Feb 20 12:22:23.986120 master-0 kubenswrapper[31420]: I0220 12:22:23.986051 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 20 12:22:24.941221 master-0 kubenswrapper[31420]: I0220 12:22:24.941151 31420 generic.go:334] "Generic (PLEG): container finished" podID="8ee6d8dc-215d-4dc5-83df-120591fdddb7" containerID="92cff045051382fa3c89b33f22f074a4ed0c1602991e3f8131a4eaf560421d6d" exitCode=0 Feb 20 12:22:24.941489 master-0 kubenswrapper[31420]: I0220 12:22:24.941233 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8ee6d8dc-215d-4dc5-83df-120591fdddb7","Type":"ContainerDied","Data":"92cff045051382fa3c89b33f22f074a4ed0c1602991e3f8131a4eaf560421d6d"} Feb 20 12:22:24.941489 master-0 kubenswrapper[31420]: I0220 12:22:24.941266 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8ee6d8dc-215d-4dc5-83df-120591fdddb7","Type":"ContainerDied","Data":"193834c13d02ac8f6b17cebac6fe4e4f750177d37caa42ccffddf3f519073eae"} Feb 20 12:22:24.941489 master-0 kubenswrapper[31420]: I0220 12:22:24.941281 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="193834c13d02ac8f6b17cebac6fe4e4f750177d37caa42ccffddf3f519073eae" Feb 20 12:22:24.944036 master-0 kubenswrapper[31420]: I0220 12:22:24.944003 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 12:22:24.947399 master-0 kubenswrapper[31420]: I0220 12:22:24.947084 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"929378b8-f28c-4558-8b42-8b8a297e63d9","Type":"ContainerStarted","Data":"79608a92ea0c0cfda40f977814d11db1d3adc098e2dfc235ea3d5b7c526304fb"} Feb 20 12:22:24.947399 master-0 kubenswrapper[31420]: I0220 12:22:24.947162 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 20 12:22:24.947399 master-0 kubenswrapper[31420]: I0220 12:22:24.947215 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"929378b8-f28c-4558-8b42-8b8a297e63d9","Type":"ContainerStarted","Data":"fb9e2b30c83a8b391738e50bfbaf346ce6e9455f5f6cc52c238aadc68f691d27"} Feb 20 12:22:25.012332 master-0 kubenswrapper[31420]: I0220 12:22:25.010508 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.010481069 podStartE2EDuration="2.010481069s" podCreationTimestamp="2026-02-20 12:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:24.979413511 +0000 UTC m=+1049.698651752" watchObservedRunningTime="2026-02-20 12:22:25.010481069 +0000 UTC m=+1049.729719310" Feb 20 12:22:25.055436 master-0 kubenswrapper[31420]: I0220 12:22:25.055366 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-combined-ca-bundle\") pod \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " Feb 20 12:22:25.055686 master-0 kubenswrapper[31420]: I0220 12:22:25.055449 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-config-data\") pod \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " Feb 20 12:22:25.055686 master-0 kubenswrapper[31420]: I0220 12:22:25.055483 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vkq7\" (UniqueName: \"kubernetes.io/projected/8ee6d8dc-215d-4dc5-83df-120591fdddb7-kube-api-access-2vkq7\") pod \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\" (UID: \"8ee6d8dc-215d-4dc5-83df-120591fdddb7\") " Feb 20 12:22:25.062571 master-0 kubenswrapper[31420]: I0220 12:22:25.062470 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee6d8dc-215d-4dc5-83df-120591fdddb7-kube-api-access-2vkq7" (OuterVolumeSpecName: "kube-api-access-2vkq7") pod "8ee6d8dc-215d-4dc5-83df-120591fdddb7" (UID: "8ee6d8dc-215d-4dc5-83df-120591fdddb7"). InnerVolumeSpecName "kube-api-access-2vkq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:22:25.090765 master-0 kubenswrapper[31420]: I0220 12:22:25.090608 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ee6d8dc-215d-4dc5-83df-120591fdddb7" (UID: "8ee6d8dc-215d-4dc5-83df-120591fdddb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:25.102552 master-0 kubenswrapper[31420]: I0220 12:22:25.102469 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-config-data" (OuterVolumeSpecName: "config-data") pod "8ee6d8dc-215d-4dc5-83df-120591fdddb7" (UID: "8ee6d8dc-215d-4dc5-83df-120591fdddb7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:25.158243 master-0 kubenswrapper[31420]: I0220 12:22:25.158183 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:25.158243 master-0 kubenswrapper[31420]: I0220 12:22:25.158225 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee6d8dc-215d-4dc5-83df-120591fdddb7-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:25.158243 master-0 kubenswrapper[31420]: I0220 12:22:25.158236 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vkq7\" (UniqueName: \"kubernetes.io/projected/8ee6d8dc-215d-4dc5-83df-120591fdddb7-kube-api-access-2vkq7\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:25.170538 master-0 kubenswrapper[31420]: I0220 12:22:25.170458 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Feb 20 12:22:26.011236 master-0 kubenswrapper[31420]: I0220 12:22:26.000106 31420 generic.go:334] "Generic (PLEG): container finished" podID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerID="55a36ac03714124e9dcb35b2689bc5ee9f829fe0b2d3569ce4a619e012ca25c9" exitCode=0 Feb 20 12:22:26.011236 master-0 kubenswrapper[31420]: I0220 12:22:26.000439 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d11c3184-d8c7-45dc-8988-d4eb7f86289d","Type":"ContainerDied","Data":"55a36ac03714124e9dcb35b2689bc5ee9f829fe0b2d3569ce4a619e012ca25c9"} Feb 20 12:22:26.011236 master-0 kubenswrapper[31420]: I0220 12:22:26.001909 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 12:22:26.050678 master-0 kubenswrapper[31420]: I0220 12:22:26.050577 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 20 12:22:26.083575 master-0 kubenswrapper[31420]: I0220 12:22:26.083402 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:22:26.107148 master-0 kubenswrapper[31420]: I0220 12:22:26.107068 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:22:26.137265 master-0 kubenswrapper[31420]: I0220 12:22:26.137129 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:22:26.137860 master-0 kubenswrapper[31420]: E0220 12:22:26.137837 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee6d8dc-215d-4dc5-83df-120591fdddb7" containerName="nova-scheduler-scheduler" Feb 20 12:22:26.137860 master-0 kubenswrapper[31420]: I0220 12:22:26.137857 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee6d8dc-215d-4dc5-83df-120591fdddb7" containerName="nova-scheduler-scheduler" Feb 20 12:22:26.138096 master-0 kubenswrapper[31420]: I0220 12:22:26.138072 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee6d8dc-215d-4dc5-83df-120591fdddb7" containerName="nova-scheduler-scheduler" Feb 20 12:22:26.139091 master-0 kubenswrapper[31420]: I0220 12:22:26.138894 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 12:22:26.141560 master-0 kubenswrapper[31420]: I0220 12:22:26.141502 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 20 12:22:26.147564 master-0 kubenswrapper[31420]: I0220 12:22:26.147485 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:22:26.239015 master-0 kubenswrapper[31420]: I0220 12:22:26.238945 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpfrd\" (UniqueName: \"kubernetes.io/projected/0465bc3e-a104-4b88-b897-8d59c142c137-kube-api-access-vpfrd\") pod \"nova-scheduler-0\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") " pod="openstack/nova-scheduler-0" Feb 20 12:22:26.239230 master-0 kubenswrapper[31420]: I0220 12:22:26.239189 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-config-data\") pod \"nova-scheduler-0\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") " pod="openstack/nova-scheduler-0" Feb 20 12:22:26.239270 master-0 kubenswrapper[31420]: I0220 12:22:26.239238 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") " pod="openstack/nova-scheduler-0" Feb 20 12:22:26.341353 master-0 kubenswrapper[31420]: I0220 12:22:26.341263 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-config-data\") pod \"nova-scheduler-0\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") " pod="openstack/nova-scheduler-0" Feb 20 
12:22:26.341353 master-0 kubenswrapper[31420]: I0220 12:22:26.341340 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") " pod="openstack/nova-scheduler-0" Feb 20 12:22:26.341353 master-0 kubenswrapper[31420]: I0220 12:22:26.341378 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpfrd\" (UniqueName: \"kubernetes.io/projected/0465bc3e-a104-4b88-b897-8d59c142c137-kube-api-access-vpfrd\") pod \"nova-scheduler-0\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") " pod="openstack/nova-scheduler-0" Feb 20 12:22:26.345695 master-0 kubenswrapper[31420]: I0220 12:22:26.345641 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-config-data\") pod \"nova-scheduler-0\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") " pod="openstack/nova-scheduler-0" Feb 20 12:22:26.347425 master-0 kubenswrapper[31420]: I0220 12:22:26.347359 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") " pod="openstack/nova-scheduler-0" Feb 20 12:22:26.359853 master-0 kubenswrapper[31420]: I0220 12:22:26.359780 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpfrd\" (UniqueName: \"kubernetes.io/projected/0465bc3e-a104-4b88-b897-8d59c142c137-kube-api-access-vpfrd\") pod \"nova-scheduler-0\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") " pod="openstack/nova-scheduler-0" Feb 20 12:22:26.427098 master-0 kubenswrapper[31420]: I0220 12:22:26.426859 31420 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 20 12:22:26.506394 master-0 kubenswrapper[31420]: I0220 12:22:26.506281 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 12:22:26.544639 master-0 kubenswrapper[31420]: I0220 12:22:26.544545 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11c3184-d8c7-45dc-8988-d4eb7f86289d-logs\") pod \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " Feb 20 12:22:26.544991 master-0 kubenswrapper[31420]: I0220 12:22:26.544808 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpkvm\" (UniqueName: \"kubernetes.io/projected/d11c3184-d8c7-45dc-8988-d4eb7f86289d-kube-api-access-tpkvm\") pod \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " Feb 20 12:22:26.544991 master-0 kubenswrapper[31420]: I0220 12:22:26.544852 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-combined-ca-bundle\") pod \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " Feb 20 12:22:26.544991 master-0 kubenswrapper[31420]: I0220 12:22:26.544973 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-config-data\") pod \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\" (UID: \"d11c3184-d8c7-45dc-8988-d4eb7f86289d\") " Feb 20 12:22:26.545479 master-0 kubenswrapper[31420]: I0220 12:22:26.545419 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11c3184-d8c7-45dc-8988-d4eb7f86289d-logs" (OuterVolumeSpecName: "logs") pod 
"d11c3184-d8c7-45dc-8988-d4eb7f86289d" (UID: "d11c3184-d8c7-45dc-8988-d4eb7f86289d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:22:26.549047 master-0 kubenswrapper[31420]: I0220 12:22:26.548912 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d11c3184-d8c7-45dc-8988-d4eb7f86289d-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:26.564105 master-0 kubenswrapper[31420]: I0220 12:22:26.563956 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11c3184-d8c7-45dc-8988-d4eb7f86289d-kube-api-access-tpkvm" (OuterVolumeSpecName: "kube-api-access-tpkvm") pod "d11c3184-d8c7-45dc-8988-d4eb7f86289d" (UID: "d11c3184-d8c7-45dc-8988-d4eb7f86289d"). InnerVolumeSpecName "kube-api-access-tpkvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:22:26.603824 master-0 kubenswrapper[31420]: I0220 12:22:26.603767 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d11c3184-d8c7-45dc-8988-d4eb7f86289d" (UID: "d11c3184-d8c7-45dc-8988-d4eb7f86289d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:26.640914 master-0 kubenswrapper[31420]: I0220 12:22:26.640845 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-config-data" (OuterVolumeSpecName: "config-data") pod "d11c3184-d8c7-45dc-8988-d4eb7f86289d" (UID: "d11c3184-d8c7-45dc-8988-d4eb7f86289d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:26.652932 master-0 kubenswrapper[31420]: I0220 12:22:26.651858 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpkvm\" (UniqueName: \"kubernetes.io/projected/d11c3184-d8c7-45dc-8988-d4eb7f86289d-kube-api-access-tpkvm\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:26.652932 master-0 kubenswrapper[31420]: I0220 12:22:26.651888 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:26.652932 master-0 kubenswrapper[31420]: I0220 12:22:26.651902 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11c3184-d8c7-45dc-8988-d4eb7f86289d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:26.998213 master-0 kubenswrapper[31420]: I0220 12:22:26.998153 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:22:27.000274 master-0 kubenswrapper[31420]: W0220 12:22:27.000213 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0465bc3e_a104_4b88_b897_8d59c142c137.slice/crio-c7b84a1d0ce0d2208442a438a4c7dba3082c14eae69892baa8d93c3f5f91e388 WatchSource:0}: Error finding container c7b84a1d0ce0d2208442a438a4c7dba3082c14eae69892baa8d93c3f5f91e388: Status 404 returned error can't find the container with id c7b84a1d0ce0d2208442a438a4c7dba3082c14eae69892baa8d93c3f5f91e388 Feb 20 12:22:27.014047 master-0 kubenswrapper[31420]: I0220 12:22:27.013904 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d11c3184-d8c7-45dc-8988-d4eb7f86289d","Type":"ContainerDied","Data":"8417ac3156f92b3e56d51020562b6c3ae541e950e7eb89312073282747c679c0"} Feb 20 12:22:27.014047 master-0 
kubenswrapper[31420]: I0220 12:22:27.013932 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 20 12:22:27.014047 master-0 kubenswrapper[31420]: I0220 12:22:27.013966 31420 scope.go:117] "RemoveContainer" containerID="55a36ac03714124e9dcb35b2689bc5ee9f829fe0b2d3569ce4a619e012ca25c9" Feb 20 12:22:27.020106 master-0 kubenswrapper[31420]: I0220 12:22:27.019810 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 20 12:22:27.065249 master-0 kubenswrapper[31420]: I0220 12:22:27.065164 31420 scope.go:117] "RemoveContainer" containerID="e804a68338c7242b5c81824b2ea26b981b8602ecb87f753ce493145579ac0cf1" Feb 20 12:22:27.125570 master-0 kubenswrapper[31420]: I0220 12:22:27.123765 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:27.140631 master-0 kubenswrapper[31420]: I0220 12:22:27.140560 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:27.158887 master-0 kubenswrapper[31420]: I0220 12:22:27.157591 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:27.158887 master-0 kubenswrapper[31420]: E0220 12:22:27.158264 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-log" Feb 20 12:22:27.158887 master-0 kubenswrapper[31420]: I0220 12:22:27.158290 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-log" Feb 20 12:22:27.158887 master-0 kubenswrapper[31420]: E0220 12:22:27.158320 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-api" Feb 20 12:22:27.158887 master-0 kubenswrapper[31420]: I0220 12:22:27.158329 31420 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-api" Feb 20 12:22:27.160053 master-0 kubenswrapper[31420]: I0220 12:22:27.160016 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-log" Feb 20 12:22:27.160110 master-0 kubenswrapper[31420]: I0220 12:22:27.160080 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" containerName="nova-api-api" Feb 20 12:22:27.161505 master-0 kubenswrapper[31420]: I0220 12:22:27.161472 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 20 12:22:27.163960 master-0 kubenswrapper[31420]: I0220 12:22:27.163934 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 20 12:22:27.180294 master-0 kubenswrapper[31420]: I0220 12:22:27.173082 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:27.267636 master-0 kubenswrapper[31420]: I0220 12:22:27.267558 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwplj\" (UniqueName: \"kubernetes.io/projected/3014abfe-1825-4804-a49d-b1372fecdffc-kube-api-access-xwplj\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.267794 master-0 kubenswrapper[31420]: I0220 12:22:27.267763 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.268043 master-0 kubenswrapper[31420]: I0220 12:22:27.268004 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-config-data\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.268117 master-0 kubenswrapper[31420]: I0220 12:22:27.268045 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3014abfe-1825-4804-a49d-b1372fecdffc-logs\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.373105 master-0 kubenswrapper[31420]: I0220 12:22:27.373049 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.373330 master-0 kubenswrapper[31420]: I0220 12:22:27.373244 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-config-data\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.374113 master-0 kubenswrapper[31420]: I0220 12:22:27.373278 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3014abfe-1825-4804-a49d-b1372fecdffc-logs\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.374221 master-0 kubenswrapper[31420]: I0220 12:22:27.374195 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwplj\" (UniqueName: \"kubernetes.io/projected/3014abfe-1825-4804-a49d-b1372fecdffc-kube-api-access-xwplj\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " 
pod="openstack/nova-api-0" Feb 20 12:22:27.374914 master-0 kubenswrapper[31420]: I0220 12:22:27.374860 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3014abfe-1825-4804-a49d-b1372fecdffc-logs\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.377508 master-0 kubenswrapper[31420]: I0220 12:22:27.377484 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-config-data\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.377620 master-0 kubenswrapper[31420]: I0220 12:22:27.377576 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.394939 master-0 kubenswrapper[31420]: I0220 12:22:27.394875 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwplj\" (UniqueName: \"kubernetes.io/projected/3014abfe-1825-4804-a49d-b1372fecdffc-kube-api-access-xwplj\") pod \"nova-api-0\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " pod="openstack/nova-api-0" Feb 20 12:22:27.513648 master-0 kubenswrapper[31420]: I0220 12:22:27.513523 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 20 12:22:27.522078 master-0 kubenswrapper[31420]: I0220 12:22:27.521981 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ee6d8dc-215d-4dc5-83df-120591fdddb7" path="/var/lib/kubelet/pods/8ee6d8dc-215d-4dc5-83df-120591fdddb7/volumes" Feb 20 12:22:27.523476 master-0 kubenswrapper[31420]: I0220 12:22:27.523408 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11c3184-d8c7-45dc-8988-d4eb7f86289d" path="/var/lib/kubelet/pods/d11c3184-d8c7-45dc-8988-d4eb7f86289d/volumes" Feb 20 12:22:28.027194 master-0 kubenswrapper[31420]: I0220 12:22:28.027137 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0465bc3e-a104-4b88-b897-8d59c142c137","Type":"ContainerStarted","Data":"320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812"} Feb 20 12:22:28.027194 master-0 kubenswrapper[31420]: I0220 12:22:28.027186 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0465bc3e-a104-4b88-b897-8d59c142c137","Type":"ContainerStarted","Data":"c7b84a1d0ce0d2208442a438a4c7dba3082c14eae69892baa8d93c3f5f91e388"} Feb 20 12:22:28.061551 master-0 kubenswrapper[31420]: I0220 12:22:28.044836 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.044817115 podStartE2EDuration="2.044817115s" podCreationTimestamp="2026-02-20 12:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:28.043747705 +0000 UTC m=+1052.762985946" watchObservedRunningTime="2026-02-20 12:22:28.044817115 +0000 UTC m=+1052.764055356" Feb 20 12:22:28.124927 master-0 kubenswrapper[31420]: W0220 12:22:28.124856 31420 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3014abfe_1825_4804_a49d_b1372fecdffc.slice/crio-412340dc97e5e12a50c69ebcc1ac52f54cd0975fdc9570a2caf89b7d15982467 WatchSource:0}: Error finding container 412340dc97e5e12a50c69ebcc1ac52f54cd0975fdc9570a2caf89b7d15982467: Status 404 returned error can't find the container with id 412340dc97e5e12a50c69ebcc1ac52f54cd0975fdc9570a2caf89b7d15982467 Feb 20 12:22:28.128560 master-0 kubenswrapper[31420]: I0220 12:22:28.128452 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:29.040464 master-0 kubenswrapper[31420]: I0220 12:22:29.040399 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3014abfe-1825-4804-a49d-b1372fecdffc","Type":"ContainerStarted","Data":"a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397"} Feb 20 12:22:29.040464 master-0 kubenswrapper[31420]: I0220 12:22:29.040452 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3014abfe-1825-4804-a49d-b1372fecdffc","Type":"ContainerStarted","Data":"9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143"} Feb 20 12:22:29.040464 master-0 kubenswrapper[31420]: I0220 12:22:29.040466 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3014abfe-1825-4804-a49d-b1372fecdffc","Type":"ContainerStarted","Data":"412340dc97e5e12a50c69ebcc1ac52f54cd0975fdc9570a2caf89b7d15982467"} Feb 20 12:22:29.069795 master-0 kubenswrapper[31420]: I0220 12:22:29.069687 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.069660942 podStartE2EDuration="2.069660942s" podCreationTimestamp="2026-02-20 12:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:29.062717956 +0000 UTC m=+1053.781956217" 
watchObservedRunningTime="2026-02-20 12:22:29.069660942 +0000 UTC m=+1053.788899183" Feb 20 12:22:31.522766 master-0 kubenswrapper[31420]: I0220 12:22:31.522637 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 20 12:22:33.527391 master-0 kubenswrapper[31420]: I0220 12:22:33.527267 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 20 12:22:36.507153 master-0 kubenswrapper[31420]: I0220 12:22:36.507046 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 20 12:22:36.564839 master-0 kubenswrapper[31420]: I0220 12:22:36.564761 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 20 12:22:37.227814 master-0 kubenswrapper[31420]: I0220 12:22:37.227690 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 20 12:22:37.519050 master-0 kubenswrapper[31420]: I0220 12:22:37.518898 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 20 12:22:37.520046 master-0 kubenswrapper[31420]: I0220 12:22:37.519080 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 20 12:22:38.598790 master-0 kubenswrapper[31420]: I0220 12:22:38.598709 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.11:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 20 12:22:38.599649 master-0 kubenswrapper[31420]: I0220 12:22:38.598789 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" containerName="nova-api-api" probeResult="failure" 
output="Get \"http://10.128.1.11:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 20 12:22:44.275635 master-0 kubenswrapper[31420]: I0220 12:22:44.275566 31420 generic.go:334] "Generic (PLEG): container finished" podID="e8e8aa2e-fd54-4e46-96ae-34f769964346" containerID="f94b6604b63179e65d16ef147da2554a208fc08ba19eb028a782387a8bfa8aa1" exitCode=137 Feb 20 12:22:44.276269 master-0 kubenswrapper[31420]: I0220 12:22:44.275660 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e8e8aa2e-fd54-4e46-96ae-34f769964346","Type":"ContainerDied","Data":"f94b6604b63179e65d16ef147da2554a208fc08ba19eb028a782387a8bfa8aa1"} Feb 20 12:22:44.276269 master-0 kubenswrapper[31420]: I0220 12:22:44.275695 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e8e8aa2e-fd54-4e46-96ae-34f769964346","Type":"ContainerDied","Data":"462337f702831894ea2c5a6c7bd7cee2648516a8b416c395596914d689001e7b"} Feb 20 12:22:44.276269 master-0 kubenswrapper[31420]: I0220 12:22:44.275708 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="462337f702831894ea2c5a6c7bd7cee2648516a8b416c395596914d689001e7b" Feb 20 12:22:44.282580 master-0 kubenswrapper[31420]: I0220 12:22:44.282516 31420 generic.go:334] "Generic (PLEG): container finished" podID="51ecfe8c-6fff-4f97-a962-e36c8f813070" containerID="8b114439f867f660a7e8a91df375bc0a6301205a9fbc89fe4871f9a998d50b08" exitCode=137 Feb 20 12:22:44.282797 master-0 kubenswrapper[31420]: I0220 12:22:44.282562 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ecfe8c-6fff-4f97-a962-e36c8f813070","Type":"ContainerDied","Data":"8b114439f867f660a7e8a91df375bc0a6301205a9fbc89fe4871f9a998d50b08"} Feb 20 12:22:44.376634 master-0 kubenswrapper[31420]: I0220 12:22:44.376592 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:44.384481 master-0 kubenswrapper[31420]: I0220 12:22:44.384439 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 12:22:44.482566 master-0 kubenswrapper[31420]: I0220 12:22:44.482496 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-combined-ca-bundle\") pod \"e8e8aa2e-fd54-4e46-96ae-34f769964346\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " Feb 20 12:22:44.482786 master-0 kubenswrapper[31420]: I0220 12:22:44.482643 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-config-data\") pod \"e8e8aa2e-fd54-4e46-96ae-34f769964346\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " Feb 20 12:22:44.482786 master-0 kubenswrapper[31420]: I0220 12:22:44.482668 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kwp6\" (UniqueName: \"kubernetes.io/projected/e8e8aa2e-fd54-4e46-96ae-34f769964346-kube-api-access-9kwp6\") pod \"e8e8aa2e-fd54-4e46-96ae-34f769964346\" (UID: \"e8e8aa2e-fd54-4e46-96ae-34f769964346\") " Feb 20 12:22:44.482786 master-0 kubenswrapper[31420]: I0220 12:22:44.482740 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-combined-ca-bundle\") pod \"51ecfe8c-6fff-4f97-a962-e36c8f813070\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " Feb 20 12:22:44.482891 master-0 kubenswrapper[31420]: I0220 12:22:44.482814 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-config-data\") pod \"51ecfe8c-6fff-4f97-a962-e36c8f813070\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " Feb 20 12:22:44.482891 master-0 kubenswrapper[31420]: I0220 12:22:44.482860 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ecfe8c-6fff-4f97-a962-e36c8f813070-logs\") pod \"51ecfe8c-6fff-4f97-a962-e36c8f813070\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " Feb 20 12:22:44.482891 master-0 kubenswrapper[31420]: I0220 12:22:44.482887 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rphz2\" (UniqueName: \"kubernetes.io/projected/51ecfe8c-6fff-4f97-a962-e36c8f813070-kube-api-access-rphz2\") pod \"51ecfe8c-6fff-4f97-a962-e36c8f813070\" (UID: \"51ecfe8c-6fff-4f97-a962-e36c8f813070\") " Feb 20 12:22:44.486455 master-0 kubenswrapper[31420]: I0220 12:22:44.486331 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e8aa2e-fd54-4e46-96ae-34f769964346-kube-api-access-9kwp6" (OuterVolumeSpecName: "kube-api-access-9kwp6") pod "e8e8aa2e-fd54-4e46-96ae-34f769964346" (UID: "e8e8aa2e-fd54-4e46-96ae-34f769964346"). InnerVolumeSpecName "kube-api-access-9kwp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:22:44.486730 master-0 kubenswrapper[31420]: I0220 12:22:44.486601 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ecfe8c-6fff-4f97-a962-e36c8f813070-kube-api-access-rphz2" (OuterVolumeSpecName: "kube-api-access-rphz2") pod "51ecfe8c-6fff-4f97-a962-e36c8f813070" (UID: "51ecfe8c-6fff-4f97-a962-e36c8f813070"). InnerVolumeSpecName "kube-api-access-rphz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:22:44.486972 master-0 kubenswrapper[31420]: I0220 12:22:44.486934 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ecfe8c-6fff-4f97-a962-e36c8f813070-logs" (OuterVolumeSpecName: "logs") pod "51ecfe8c-6fff-4f97-a962-e36c8f813070" (UID: "51ecfe8c-6fff-4f97-a962-e36c8f813070"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:22:44.516684 master-0 kubenswrapper[31420]: I0220 12:22:44.516598 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-config-data" (OuterVolumeSpecName: "config-data") pod "e8e8aa2e-fd54-4e46-96ae-34f769964346" (UID: "e8e8aa2e-fd54-4e46-96ae-34f769964346"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:44.517452 master-0 kubenswrapper[31420]: I0220 12:22:44.517392 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-config-data" (OuterVolumeSpecName: "config-data") pod "51ecfe8c-6fff-4f97-a962-e36c8f813070" (UID: "51ecfe8c-6fff-4f97-a962-e36c8f813070"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:44.518733 master-0 kubenswrapper[31420]: I0220 12:22:44.518697 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51ecfe8c-6fff-4f97-a962-e36c8f813070" (UID: "51ecfe8c-6fff-4f97-a962-e36c8f813070"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:44.525206 master-0 kubenswrapper[31420]: I0220 12:22:44.525172 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8e8aa2e-fd54-4e46-96ae-34f769964346" (UID: "e8e8aa2e-fd54-4e46-96ae-34f769964346"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:44.587152 master-0 kubenswrapper[31420]: I0220 12:22:44.587036 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:44.587152 master-0 kubenswrapper[31420]: I0220 12:22:44.587084 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kwp6\" (UniqueName: \"kubernetes.io/projected/e8e8aa2e-fd54-4e46-96ae-34f769964346-kube-api-access-9kwp6\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:44.587152 master-0 kubenswrapper[31420]: I0220 12:22:44.587100 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:44.587152 master-0 kubenswrapper[31420]: I0220 12:22:44.587115 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ecfe8c-6fff-4f97-a962-e36c8f813070-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:44.587152 master-0 kubenswrapper[31420]: I0220 12:22:44.587127 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ecfe8c-6fff-4f97-a962-e36c8f813070-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:44.587152 master-0 kubenswrapper[31420]: I0220 
12:22:44.587139 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rphz2\" (UniqueName: \"kubernetes.io/projected/51ecfe8c-6fff-4f97-a962-e36c8f813070-kube-api-access-rphz2\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:44.587152 master-0 kubenswrapper[31420]: I0220 12:22:44.587153 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e8aa2e-fd54-4e46-96ae-34f769964346-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:45.304958 master-0 kubenswrapper[31420]: I0220 12:22:45.304803 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ecfe8c-6fff-4f97-a962-e36c8f813070","Type":"ContainerDied","Data":"bec7769dff0ff11dfeb39bb6d98762be7014ac9feb72ac0dc64c1f6629ab1ad7"} Feb 20 12:22:45.304958 master-0 kubenswrapper[31420]: I0220 12:22:45.304942 31420 scope.go:117] "RemoveContainer" containerID="8b114439f867f660a7e8a91df375bc0a6301205a9fbc89fe4871f9a998d50b08" Feb 20 12:22:45.306119 master-0 kubenswrapper[31420]: I0220 12:22:45.304936 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 12:22:45.306119 master-0 kubenswrapper[31420]: I0220 12:22:45.304946 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.351216 master-0 kubenswrapper[31420]: I0220 12:22:45.351006 31420 scope.go:117] "RemoveContainer" containerID="3ae035386e06f404c0ba4bf213cb34a3da15f9d06b31125d90235ab582e2991b" Feb 20 12:22:45.399570 master-0 kubenswrapper[31420]: I0220 12:22:45.397251 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 12:22:45.416570 master-0 kubenswrapper[31420]: I0220 12:22:45.414445 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 12:22:45.428745 master-0 kubenswrapper[31420]: I0220 12:22:45.428618 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 12:22:45.440792 master-0 kubenswrapper[31420]: I0220 12:22:45.439938 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: I0220 12:22:45.455982 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: E0220 12:22:45.456511 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e8aa2e-fd54-4e46-96ae-34f769964346" containerName="nova-cell1-novncproxy-novncproxy" Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: I0220 12:22:45.456527 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e8aa2e-fd54-4e46-96ae-34f769964346" containerName="nova-cell1-novncproxy-novncproxy" Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: E0220 12:22:45.456560 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ecfe8c-6fff-4f97-a962-e36c8f813070" containerName="nova-metadata-metadata" Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: I0220 12:22:45.456567 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ecfe8c-6fff-4f97-a962-e36c8f813070" 
containerName="nova-metadata-metadata" Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: E0220 12:22:45.456597 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ecfe8c-6fff-4f97-a962-e36c8f813070" containerName="nova-metadata-log" Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: I0220 12:22:45.456606 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ecfe8c-6fff-4f97-a962-e36c8f813070" containerName="nova-metadata-log" Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: I0220 12:22:45.456849 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ecfe8c-6fff-4f97-a962-e36c8f813070" containerName="nova-metadata-metadata" Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: I0220 12:22:45.456876 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e8aa2e-fd54-4e46-96ae-34f769964346" containerName="nova-cell1-novncproxy-novncproxy" Feb 20 12:22:45.457087 master-0 kubenswrapper[31420]: I0220 12:22:45.456895 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ecfe8c-6fff-4f97-a962-e36c8f813070" containerName="nova-metadata-log" Feb 20 12:22:45.457763 master-0 kubenswrapper[31420]: I0220 12:22:45.457713 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.461451 master-0 kubenswrapper[31420]: I0220 12:22:45.461392 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 20 12:22:45.461713 master-0 kubenswrapper[31420]: I0220 12:22:45.461684 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 20 12:22:45.461871 master-0 kubenswrapper[31420]: I0220 12:22:45.461843 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 20 12:22:45.491106 master-0 kubenswrapper[31420]: I0220 12:22:45.491056 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 12:22:45.510677 master-0 kubenswrapper[31420]: I0220 12:22:45.510631 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ecfe8c-6fff-4f97-a962-e36c8f813070" path="/var/lib/kubelet/pods/51ecfe8c-6fff-4f97-a962-e36c8f813070/volumes" Feb 20 12:22:45.512256 master-0 kubenswrapper[31420]: I0220 12:22:45.512231 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e8aa2e-fd54-4e46-96ae-34f769964346" path="/var/lib/kubelet/pods/e8e8aa2e-fd54-4e46-96ae-34f769964346/volumes" Feb 20 12:22:45.513158 master-0 kubenswrapper[31420]: I0220 12:22:45.513134 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 20 12:22:45.516417 master-0 kubenswrapper[31420]: I0220 12:22:45.516379 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 12:22:45.522603 master-0 kubenswrapper[31420]: I0220 12:22:45.520308 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 20 12:22:45.522603 master-0 kubenswrapper[31420]: I0220 12:22:45.521220 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 20 12:22:45.524961 master-0 kubenswrapper[31420]: I0220 12:22:45.523376 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 12:22:45.614399 master-0 kubenswrapper[31420]: I0220 12:22:45.614271 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.614636 master-0 kubenswrapper[31420]: I0220 12:22:45.614472 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.614636 master-0 kubenswrapper[31420]: I0220 12:22:45.614541 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkzgw\" (UniqueName: \"kubernetes.io/projected/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-kube-api-access-hkzgw\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.614636 master-0 kubenswrapper[31420]: I0220 12:22:45.614569 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.614636 master-0 kubenswrapper[31420]: I0220 12:22:45.614611 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6818035-947a-43e2-8708-c2b70b22b705-logs\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.614925 master-0 kubenswrapper[31420]: I0220 12:22:45.614880 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-config-data\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.614925 master-0 kubenswrapper[31420]: I0220 12:22:45.614908 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5htt\" (UniqueName: \"kubernetes.io/projected/f6818035-947a-43e2-8708-c2b70b22b705-kube-api-access-c5htt\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.615011 master-0 kubenswrapper[31420]: I0220 12:22:45.614969 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.615011 master-0 kubenswrapper[31420]: I0220 12:22:45.614988 31420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.615338 master-0 kubenswrapper[31420]: I0220 12:22:45.615121 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.723783 master-0 kubenswrapper[31420]: I0220 12:22:45.723313 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkzgw\" (UniqueName: \"kubernetes.io/projected/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-kube-api-access-hkzgw\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.723783 master-0 kubenswrapper[31420]: I0220 12:22:45.723444 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.723783 master-0 kubenswrapper[31420]: I0220 12:22:45.723619 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6818035-947a-43e2-8708-c2b70b22b705-logs\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.724316 master-0 kubenswrapper[31420]: I0220 12:22:45.723811 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-config-data\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.724316 master-0 kubenswrapper[31420]: I0220 12:22:45.723856 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5htt\" (UniqueName: \"kubernetes.io/projected/f6818035-947a-43e2-8708-c2b70b22b705-kube-api-access-c5htt\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.724316 master-0 kubenswrapper[31420]: I0220 12:22:45.723940 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.724316 master-0 kubenswrapper[31420]: I0220 12:22:45.723973 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.724316 master-0 kubenswrapper[31420]: I0220 12:22:45.724228 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.724724 master-0 kubenswrapper[31420]: I0220 12:22:45.724371 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.724724 master-0 kubenswrapper[31420]: I0220 12:22:45.724464 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.732326 master-0 kubenswrapper[31420]: I0220 12:22:45.732252 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.736401 master-0 kubenswrapper[31420]: I0220 12:22:45.736354 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.736835 master-0 kubenswrapper[31420]: I0220 12:22:45.736745 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6818035-947a-43e2-8708-c2b70b22b705-logs\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.736963 master-0 kubenswrapper[31420]: I0220 12:22:45.736787 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.737730 master-0 kubenswrapper[31420]: I0220 12:22:45.737666 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.740049 master-0 kubenswrapper[31420]: I0220 12:22:45.739984 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.750009 master-0 kubenswrapper[31420]: I0220 12:22:45.748704 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-config-data\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.755984 master-0 kubenswrapper[31420]: I0220 12:22:45.755931 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkzgw\" (UniqueName: \"kubernetes.io/projected/7b2a9edf-20d3-48c8-99bf-4628575dbd9f-kube-api-access-hkzgw\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b2a9edf-20d3-48c8-99bf-4628575dbd9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.759238 master-0 kubenswrapper[31420]: I0220 12:22:45.759184 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.760965 master-0 kubenswrapper[31420]: I0220 12:22:45.760875 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5htt\" (UniqueName: \"kubernetes.io/projected/f6818035-947a-43e2-8708-c2b70b22b705-kube-api-access-c5htt\") pod \"nova-metadata-0\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") " pod="openstack/nova-metadata-0" Feb 20 12:22:45.788380 master-0 kubenswrapper[31420]: I0220 12:22:45.788331 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:45.844018 master-0 kubenswrapper[31420]: I0220 12:22:45.843938 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 12:22:46.357685 master-0 kubenswrapper[31420]: I0220 12:22:46.357293 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 12:22:46.366980 master-0 kubenswrapper[31420]: W0220 12:22:46.366908 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b2a9edf_20d3_48c8_99bf_4628575dbd9f.slice/crio-1d9f9942ae2fd0041b69783bb122056d265b334a5a0c05909b79cb5615eae1bc WatchSource:0}: Error finding container 1d9f9942ae2fd0041b69783bb122056d265b334a5a0c05909b79cb5615eae1bc: Status 404 returned error can't find the container with id 1d9f9942ae2fd0041b69783bb122056d265b334a5a0c05909b79cb5615eae1bc Feb 20 12:22:46.473992 master-0 kubenswrapper[31420]: I0220 12:22:46.473942 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 12:22:46.474428 master-0 kubenswrapper[31420]: W0220 12:22:46.474377 31420 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6818035_947a_43e2_8708_c2b70b22b705.slice/crio-3fa25bf0a55dc772b662e5a7f87f3345aa7fec8708b08e88bec23527914e5973 WatchSource:0}: Error finding container 3fa25bf0a55dc772b662e5a7f87f3345aa7fec8708b08e88bec23527914e5973: Status 404 returned error can't find the container with id 3fa25bf0a55dc772b662e5a7f87f3345aa7fec8708b08e88bec23527914e5973 Feb 20 12:22:47.360649 master-0 kubenswrapper[31420]: I0220 12:22:47.349693 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b2a9edf-20d3-48c8-99bf-4628575dbd9f","Type":"ContainerStarted","Data":"c9f6c0afa13f23fbd16c99a7f40eefa026e85a8b56e5ce89e5bf1907c4f40906"} Feb 20 12:22:47.360649 master-0 kubenswrapper[31420]: I0220 12:22:47.349755 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b2a9edf-20d3-48c8-99bf-4628575dbd9f","Type":"ContainerStarted","Data":"1d9f9942ae2fd0041b69783bb122056d265b334a5a0c05909b79cb5615eae1bc"} Feb 20 12:22:47.360649 master-0 kubenswrapper[31420]: I0220 12:22:47.352235 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6818035-947a-43e2-8708-c2b70b22b705","Type":"ContainerStarted","Data":"a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e"} Feb 20 12:22:47.360649 master-0 kubenswrapper[31420]: I0220 12:22:47.352290 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6818035-947a-43e2-8708-c2b70b22b705","Type":"ContainerStarted","Data":"e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49"} Feb 20 12:22:47.360649 master-0 kubenswrapper[31420]: I0220 12:22:47.352304 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f6818035-947a-43e2-8708-c2b70b22b705","Type":"ContainerStarted","Data":"3fa25bf0a55dc772b662e5a7f87f3345aa7fec8708b08e88bec23527914e5973"} Feb 20 12:22:47.373418 master-0 kubenswrapper[31420]: I0220 12:22:47.373323 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.373303584 podStartE2EDuration="2.373303584s" podCreationTimestamp="2026-02-20 12:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:47.369867227 +0000 UTC m=+1072.089105478" watchObservedRunningTime="2026-02-20 12:22:47.373303584 +0000 UTC m=+1072.092541835" Feb 20 12:22:47.429419 master-0 kubenswrapper[31420]: I0220 12:22:47.429325 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.429303967 podStartE2EDuration="2.429303967s" podCreationTimestamp="2026-02-20 12:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:47.414231961 +0000 UTC m=+1072.133470212" watchObservedRunningTime="2026-02-20 12:22:47.429303967 +0000 UTC m=+1072.148542218" Feb 20 12:22:47.520528 master-0 kubenswrapper[31420]: I0220 12:22:47.519322 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 20 12:22:47.520528 master-0 kubenswrapper[31420]: I0220 12:22:47.519848 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 20 12:22:47.524931 master-0 kubenswrapper[31420]: I0220 12:22:47.524860 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 20 12:22:47.525991 master-0 kubenswrapper[31420]: I0220 12:22:47.525946 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Feb 20 12:22:48.369390 master-0 kubenswrapper[31420]: I0220 12:22:48.369200 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 20 12:22:48.374209 master-0 kubenswrapper[31420]: I0220 12:22:48.374147 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 20 12:22:48.741990 master-0 kubenswrapper[31420]: I0220 12:22:48.741936 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6d4b4b47-8r4td"] Feb 20 12:22:48.747403 master-0 kubenswrapper[31420]: I0220 12:22:48.746836 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.751393 master-0 kubenswrapper[31420]: I0220 12:22:48.751346 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6d4b4b47-8r4td"] Feb 20 12:22:48.861381 master-0 kubenswrapper[31420]: I0220 12:22:48.861302 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-config\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.861664 master-0 kubenswrapper[31420]: I0220 12:22:48.861557 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.861664 master-0 kubenswrapper[31420]: I0220 12:22:48.861603 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.861664 master-0 kubenswrapper[31420]: I0220 12:22:48.861661 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zclmm\" (UniqueName: \"kubernetes.io/projected/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-kube-api-access-zclmm\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.862065 master-0 kubenswrapper[31420]: I0220 12:22:48.861731 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-dns-swift-storage-0\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.862065 master-0 kubenswrapper[31420]: I0220 12:22:48.861775 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-dns-svc\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.963680 master-0 kubenswrapper[31420]: I0220 12:22:48.963519 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.963680 master-0 kubenswrapper[31420]: I0220 12:22:48.963603 31420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.963680 master-0 kubenswrapper[31420]: I0220 12:22:48.963645 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zclmm\" (UniqueName: \"kubernetes.io/projected/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-kube-api-access-zclmm\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.963680 master-0 kubenswrapper[31420]: I0220 12:22:48.963677 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-dns-swift-storage-0\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.963975 master-0 kubenswrapper[31420]: I0220 12:22:48.963880 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-dns-svc\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.964108 master-0 kubenswrapper[31420]: I0220 12:22:48.964075 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-config\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.964572 master-0 kubenswrapper[31420]: I0220 12:22:48.964520 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.964652 master-0 kubenswrapper[31420]: I0220 12:22:48.964608 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.964792 master-0 kubenswrapper[31420]: I0220 12:22:48.964764 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-dns-swift-storage-0\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.965314 master-0 kubenswrapper[31420]: I0220 12:22:48.965286 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-dns-svc\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.965931 master-0 kubenswrapper[31420]: I0220 12:22:48.965906 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-config\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:48.990559 master-0 kubenswrapper[31420]: I0220 12:22:48.990436 31420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zclmm\" (UniqueName: \"kubernetes.io/projected/82a3ce2c-bbec-4ef7-8975-5fbaced911cf-kube-api-access-zclmm\") pod \"dnsmasq-dns-5b6d4b4b47-8r4td\" (UID: \"82a3ce2c-bbec-4ef7-8975-5fbaced911cf\") " pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:49.066982 master-0 kubenswrapper[31420]: I0220 12:22:49.066862 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:49.585097 master-0 kubenswrapper[31420]: I0220 12:22:49.585045 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6d4b4b47-8r4td"] Feb 20 12:22:50.418621 master-0 kubenswrapper[31420]: I0220 12:22:50.418554 31420 generic.go:334] "Generic (PLEG): container finished" podID="82a3ce2c-bbec-4ef7-8975-5fbaced911cf" containerID="9f577416c76dc2d12a9e9c43eb3509be528ae434830555a54b7e7316a460ede0" exitCode=0 Feb 20 12:22:50.418865 master-0 kubenswrapper[31420]: I0220 12:22:50.418745 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" event={"ID":"82a3ce2c-bbec-4ef7-8975-5fbaced911cf","Type":"ContainerDied","Data":"9f577416c76dc2d12a9e9c43eb3509be528ae434830555a54b7e7316a460ede0"} Feb 20 12:22:50.418865 master-0 kubenswrapper[31420]: I0220 12:22:50.418806 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" event={"ID":"82a3ce2c-bbec-4ef7-8975-5fbaced911cf","Type":"ContainerStarted","Data":"4427a9ae782f45c033e034810424891b2e5e8195841cebd51e2a7f8a4c4d43a9"} Feb 20 12:22:50.789952 master-0 kubenswrapper[31420]: I0220 12:22:50.789877 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 20 12:22:50.845356 master-0 kubenswrapper[31420]: I0220 12:22:50.845289 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 20 12:22:50.845356 master-0 
kubenswrapper[31420]: I0220 12:22:50.845360 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 20 12:22:51.437824 master-0 kubenswrapper[31420]: I0220 12:22:51.437746 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" event={"ID":"82a3ce2c-bbec-4ef7-8975-5fbaced911cf","Type":"ContainerStarted","Data":"2886ee62e3c086324c4273b399d57d59bc55610d0b6bbb9f639694371cce04b5"} Feb 20 12:22:51.438130 master-0 kubenswrapper[31420]: I0220 12:22:51.437937 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" Feb 20 12:22:51.488557 master-0 kubenswrapper[31420]: I0220 12:22:51.488333 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td" podStartSLOduration=3.488304694 podStartE2EDuration="3.488304694s" podCreationTimestamp="2026-02-20 12:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:51.469909984 +0000 UTC m=+1076.189148265" watchObservedRunningTime="2026-02-20 12:22:51.488304694 +0000 UTC m=+1076.207542965" Feb 20 12:22:51.633816 master-0 kubenswrapper[31420]: I0220 12:22:51.633745 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:51.634087 master-0 kubenswrapper[31420]: I0220 12:22:51.633974 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" containerName="nova-api-log" containerID="cri-o://9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143" gracePeriod=30 Feb 20 12:22:51.634420 master-0 kubenswrapper[31420]: I0220 12:22:51.634108 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" 
containerName="nova-api-api" containerID="cri-o://a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397" gracePeriod=30 Feb 20 12:22:52.458841 master-0 kubenswrapper[31420]: I0220 12:22:52.458722 31420 generic.go:334] "Generic (PLEG): container finished" podID="3014abfe-1825-4804-a49d-b1372fecdffc" containerID="9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143" exitCode=143 Feb 20 12:22:52.459885 master-0 kubenswrapper[31420]: I0220 12:22:52.458824 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3014abfe-1825-4804-a49d-b1372fecdffc","Type":"ContainerDied","Data":"9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143"} Feb 20 12:22:55.372456 master-0 kubenswrapper[31420]: I0220 12:22:55.372403 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 20 12:22:55.443903 master-0 kubenswrapper[31420]: I0220 12:22:55.443841 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-config-data\") pod \"3014abfe-1825-4804-a49d-b1372fecdffc\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " Feb 20 12:22:55.444151 master-0 kubenswrapper[31420]: I0220 12:22:55.443952 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-combined-ca-bundle\") pod \"3014abfe-1825-4804-a49d-b1372fecdffc\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " Feb 20 12:22:55.444151 master-0 kubenswrapper[31420]: I0220 12:22:55.443982 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3014abfe-1825-4804-a49d-b1372fecdffc-logs\") pod \"3014abfe-1825-4804-a49d-b1372fecdffc\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " Feb 20 12:22:55.444151 
master-0 kubenswrapper[31420]: I0220 12:22:55.444071 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwplj\" (UniqueName: \"kubernetes.io/projected/3014abfe-1825-4804-a49d-b1372fecdffc-kube-api-access-xwplj\") pod \"3014abfe-1825-4804-a49d-b1372fecdffc\" (UID: \"3014abfe-1825-4804-a49d-b1372fecdffc\") " Feb 20 12:22:55.446506 master-0 kubenswrapper[31420]: I0220 12:22:55.446451 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3014abfe-1825-4804-a49d-b1372fecdffc-logs" (OuterVolumeSpecName: "logs") pod "3014abfe-1825-4804-a49d-b1372fecdffc" (UID: "3014abfe-1825-4804-a49d-b1372fecdffc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:22:55.454993 master-0 kubenswrapper[31420]: I0220 12:22:55.450307 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3014abfe-1825-4804-a49d-b1372fecdffc-kube-api-access-xwplj" (OuterVolumeSpecName: "kube-api-access-xwplj") pod "3014abfe-1825-4804-a49d-b1372fecdffc" (UID: "3014abfe-1825-4804-a49d-b1372fecdffc"). InnerVolumeSpecName "kube-api-access-xwplj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:22:55.485365 master-0 kubenswrapper[31420]: I0220 12:22:55.485295 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3014abfe-1825-4804-a49d-b1372fecdffc" (UID: "3014abfe-1825-4804-a49d-b1372fecdffc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:55.509104 master-0 kubenswrapper[31420]: I0220 12:22:55.509050 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-config-data" (OuterVolumeSpecName: "config-data") pod "3014abfe-1825-4804-a49d-b1372fecdffc" (UID: "3014abfe-1825-4804-a49d-b1372fecdffc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:22:55.514829 master-0 kubenswrapper[31420]: I0220 12:22:55.514757 31420 generic.go:334] "Generic (PLEG): container finished" podID="3014abfe-1825-4804-a49d-b1372fecdffc" containerID="a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397" exitCode=0 Feb 20 12:22:55.517554 master-0 kubenswrapper[31420]: I0220 12:22:55.515485 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 20 12:22:55.517554 master-0 kubenswrapper[31420]: I0220 12:22:55.516586 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3014abfe-1825-4804-a49d-b1372fecdffc","Type":"ContainerDied","Data":"a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397"} Feb 20 12:22:55.517554 master-0 kubenswrapper[31420]: I0220 12:22:55.516638 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3014abfe-1825-4804-a49d-b1372fecdffc","Type":"ContainerDied","Data":"412340dc97e5e12a50c69ebcc1ac52f54cd0975fdc9570a2caf89b7d15982467"} Feb 20 12:22:55.517554 master-0 kubenswrapper[31420]: I0220 12:22:55.516656 31420 scope.go:117] "RemoveContainer" containerID="a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397" Feb 20 12:22:55.550654 master-0 kubenswrapper[31420]: I0220 12:22:55.546827 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-config-data\") 
on node \"master-0\" DevicePath \"\"" Feb 20 12:22:55.550654 master-0 kubenswrapper[31420]: I0220 12:22:55.546865 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3014abfe-1825-4804-a49d-b1372fecdffc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:55.550654 master-0 kubenswrapper[31420]: I0220 12:22:55.546875 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3014abfe-1825-4804-a49d-b1372fecdffc-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:55.550654 master-0 kubenswrapper[31420]: I0220 12:22:55.546886 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwplj\" (UniqueName: \"kubernetes.io/projected/3014abfe-1825-4804-a49d-b1372fecdffc-kube-api-access-xwplj\") on node \"master-0\" DevicePath \"\"" Feb 20 12:22:55.604081 master-0 kubenswrapper[31420]: I0220 12:22:55.603981 31420 scope.go:117] "RemoveContainer" containerID="9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143" Feb 20 12:22:55.604375 master-0 kubenswrapper[31420]: I0220 12:22:55.604351 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:55.622376 master-0 kubenswrapper[31420]: I0220 12:22:55.622293 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:55.639056 master-0 kubenswrapper[31420]: I0220 12:22:55.638983 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 20 12:22:55.639596 master-0 kubenswrapper[31420]: E0220 12:22:55.639563 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" containerName="nova-api-log" Feb 20 12:22:55.639596 master-0 kubenswrapper[31420]: I0220 12:22:55.639585 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" containerName="nova-api-log" Feb 20 
12:22:55.639752 master-0 kubenswrapper[31420]: E0220 12:22:55.639614 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" containerName="nova-api-api" Feb 20 12:22:55.639752 master-0 kubenswrapper[31420]: I0220 12:22:55.639621 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" containerName="nova-api-api" Feb 20 12:22:55.639911 master-0 kubenswrapper[31420]: I0220 12:22:55.639880 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" containerName="nova-api-log" Feb 20 12:22:55.639975 master-0 kubenswrapper[31420]: I0220 12:22:55.639931 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" containerName="nova-api-api" Feb 20 12:22:55.641453 master-0 kubenswrapper[31420]: I0220 12:22:55.641368 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 20 12:22:55.647603 master-0 kubenswrapper[31420]: I0220 12:22:55.647558 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 20 12:22:55.648020 master-0 kubenswrapper[31420]: I0220 12:22:55.647992 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 20 12:22:55.655169 master-0 kubenswrapper[31420]: I0220 12:22:55.654759 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0" Feb 20 12:22:55.655169 master-0 kubenswrapper[31420]: I0220 12:22:55.654834 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-config-data\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0" Feb 20 12:22:55.655169 master-0 kubenswrapper[31420]: I0220 12:22:55.654898 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9dkd\" (UniqueName: \"kubernetes.io/projected/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-kube-api-access-l9dkd\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0" Feb 20 12:22:55.655977 master-0 kubenswrapper[31420]: I0220 12:22:55.655882 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0" Feb 20 12:22:55.656602 master-0 kubenswrapper[31420]: I0220 12:22:55.656469 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0" Feb 20 12:22:55.656676 master-0 kubenswrapper[31420]: I0220 12:22:55.656632 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-logs\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0" Feb 20 12:22:55.660137 master-0 kubenswrapper[31420]: I0220 12:22:55.660064 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 20 12:22:55.669359 master-0 kubenswrapper[31420]: I0220 12:22:55.669319 31420 scope.go:117] 
"RemoveContainer" containerID="a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397" Feb 20 12:22:55.670091 master-0 kubenswrapper[31420]: E0220 12:22:55.670059 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397\": container with ID starting with a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397 not found: ID does not exist" containerID="a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397" Feb 20 12:22:55.670235 master-0 kubenswrapper[31420]: I0220 12:22:55.670201 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397"} err="failed to get container status \"a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397\": rpc error: code = NotFound desc = could not find container \"a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397\": container with ID starting with a58630589a09a6c44acd76a774ab21471c8abb15b6cd9a5db1c61d7027ebe397 not found: ID does not exist" Feb 20 12:22:55.670348 master-0 kubenswrapper[31420]: I0220 12:22:55.670330 31420 scope.go:117] "RemoveContainer" containerID="9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143" Feb 20 12:22:55.671001 master-0 kubenswrapper[31420]: E0220 12:22:55.670949 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143\": container with ID starting with 9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143 not found: ID does not exist" containerID="9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143" Feb 20 12:22:55.671100 master-0 kubenswrapper[31420]: I0220 12:22:55.671037 31420 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143"} err="failed to get container status \"9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143\": rpc error: code = NotFound desc = could not find container \"9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143\": container with ID starting with 9631072abf5741aeedc016fcaf6a5abb05195c6adeab2b22872a783910a9b143 not found: ID does not exist"
Feb 20 12:22:55.696226 master-0 kubenswrapper[31420]: I0220 12:22:55.696122 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 20 12:22:55.758925 master-0 kubenswrapper[31420]: I0220 12:22:55.758844 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-logs\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.759156 master-0 kubenswrapper[31420]: I0220 12:22:55.758980 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.759156 master-0 kubenswrapper[31420]: I0220 12:22:55.759004 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-config-data\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.759156 master-0 kubenswrapper[31420]: I0220 12:22:55.759035 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9dkd\" (UniqueName: \"kubernetes.io/projected/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-kube-api-access-l9dkd\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.759369 master-0 kubenswrapper[31420]: I0220 12:22:55.759321 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-logs\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.759510 master-0 kubenswrapper[31420]: I0220 12:22:55.759481 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.759917 master-0 kubenswrapper[31420]: I0220 12:22:55.759878 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.764026 master-0 kubenswrapper[31420]: I0220 12:22:55.763999 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.764117 master-0 kubenswrapper[31420]: I0220 12:22:55.764102 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.764235 master-0 kubenswrapper[31420]: I0220 12:22:55.764205 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-config-data\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.765651 master-0 kubenswrapper[31420]: I0220 12:22:55.765626 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.775624 master-0 kubenswrapper[31420]: I0220 12:22:55.775519 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9dkd\" (UniqueName: \"kubernetes.io/projected/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-kube-api-access-l9dkd\") pod \"nova-api-0\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " pod="openstack/nova-api-0"
Feb 20 12:22:55.789726 master-0 kubenswrapper[31420]: I0220 12:22:55.789673 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:55.810650 master-0 kubenswrapper[31420]: I0220 12:22:55.810484 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:55.844839 master-0 kubenswrapper[31420]: I0220 12:22:55.844742 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 20 12:22:55.844839 master-0 kubenswrapper[31420]: I0220 12:22:55.844843 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 20 12:22:55.967252 master-0 kubenswrapper[31420]: I0220 12:22:55.967192 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 12:22:56.500850 master-0 kubenswrapper[31420]: I0220 12:22:56.500780 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 20 12:22:56.510996 master-0 kubenswrapper[31420]: W0220 12:22:56.510907 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e02b1f2_9416_46c2_b4b8_f0c0a351b198.slice/crio-534430aef9d67b6b8415f4389acbd32455a412e36e22087b211c5c27d0ff521e WatchSource:0}: Error finding container 534430aef9d67b6b8415f4389acbd32455a412e36e22087b211c5c27d0ff521e: Status 404 returned error can't find the container with id 534430aef9d67b6b8415f4389acbd32455a412e36e22087b211c5c27d0ff521e
Feb 20 12:22:56.555152 master-0 kubenswrapper[31420]: I0220 12:22:56.555077 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e02b1f2-9416-46c2-b4b8-f0c0a351b198","Type":"ContainerStarted","Data":"534430aef9d67b6b8415f4389acbd32455a412e36e22087b211c5c27d0ff521e"}
Feb 20 12:22:56.574004 master-0 kubenswrapper[31420]: I0220 12:22:56.573275 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 20 12:22:56.805662 master-0 kubenswrapper[31420]: I0220 12:22:56.805589 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lnnfr"]
Feb 20 12:22:56.809970 master-0 kubenswrapper[31420]: I0220 12:22:56.809902 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:56.812209 master-0 kubenswrapper[31420]: I0220 12:22:56.811903 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 20 12:22:56.812934 master-0 kubenswrapper[31420]: I0220 12:22:56.812884 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 20 12:22:56.819115 master-0 kubenswrapper[31420]: I0220 12:22:56.819059 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-n25nj"]
Feb 20 12:22:56.821126 master-0 kubenswrapper[31420]: I0220 12:22:56.821082 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:56.829980 master-0 kubenswrapper[31420]: I0220 12:22:56.829933 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lnnfr"]
Feb 20 12:22:56.844213 master-0 kubenswrapper[31420]: I0220 12:22:56.844173 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-n25nj"]
Feb 20 12:22:56.857857 master-0 kubenswrapper[31420]: I0220 12:22:56.857779 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.13:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:22:56.858107 master-0 kubenswrapper[31420]: I0220 12:22:56.857796 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.13:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 12:22:56.949784 master-0 kubenswrapper[31420]: I0220 12:22:56.948922 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-scripts\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:56.949784 master-0 kubenswrapper[31420]: I0220 12:22:56.949029 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9fwd\" (UniqueName: \"kubernetes.io/projected/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-kube-api-access-v9fwd\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:56.949784 master-0 kubenswrapper[31420]: I0220 12:22:56.949232 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:56.949784 master-0 kubenswrapper[31420]: I0220 12:22:56.949322 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rcg9\" (UniqueName: \"kubernetes.io/projected/1d111110-d012-4ec5-9c26-0701910b11b2-kube-api-access-5rcg9\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:56.949784 master-0 kubenswrapper[31420]: I0220 12:22:56.949360 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-config-data\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:56.949784 master-0 kubenswrapper[31420]: I0220 12:22:56.949712 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-config-data\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:56.950748 master-0 kubenswrapper[31420]: I0220 12:22:56.949854 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-combined-ca-bundle\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:56.950748 master-0 kubenswrapper[31420]: I0220 12:22:56.949907 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-scripts\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.052272 master-0 kubenswrapper[31420]: I0220 12:22:57.052216 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9fwd\" (UniqueName: \"kubernetes.io/projected/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-kube-api-access-v9fwd\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.052467 master-0 kubenswrapper[31420]: I0220 12:22:57.052295 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:57.052508 master-0 kubenswrapper[31420]: I0220 12:22:57.052466 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rcg9\" (UniqueName: \"kubernetes.io/projected/1d111110-d012-4ec5-9c26-0701910b11b2-kube-api-access-5rcg9\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:57.052578 master-0 kubenswrapper[31420]: I0220 12:22:57.052556 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-config-data\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.052722 master-0 kubenswrapper[31420]: I0220 12:22:57.052703 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-config-data\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:57.052812 master-0 kubenswrapper[31420]: I0220 12:22:57.052797 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-combined-ca-bundle\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.052881 master-0 kubenswrapper[31420]: I0220 12:22:57.052837 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-scripts\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.053244 master-0 kubenswrapper[31420]: I0220 12:22:57.053220 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-scripts\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:57.057298 master-0 kubenswrapper[31420]: I0220 12:22:57.057271 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-combined-ca-bundle\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.057948 master-0 kubenswrapper[31420]: I0220 12:22:57.057791 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-config-data\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.058074 master-0 kubenswrapper[31420]: I0220 12:22:57.057883 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-scripts\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.058906 master-0 kubenswrapper[31420]: I0220 12:22:57.058862 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:57.064086 master-0 kubenswrapper[31420]: I0220 12:22:57.064022 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-config-data\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:57.064649 master-0 kubenswrapper[31420]: I0220 12:22:57.064616 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-scripts\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:57.067075 master-0 kubenswrapper[31420]: I0220 12:22:57.067030 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9fwd\" (UniqueName: \"kubernetes.io/projected/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-kube-api-access-v9fwd\") pod \"nova-cell1-host-discover-n25nj\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.068788 master-0 kubenswrapper[31420]: I0220 12:22:57.068748 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rcg9\" (UniqueName: \"kubernetes.io/projected/1d111110-d012-4ec5-9c26-0701910b11b2-kube-api-access-5rcg9\") pod \"nova-cell1-cell-mapping-lnnfr\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:57.143762 master-0 kubenswrapper[31420]: I0220 12:22:57.143707 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lnnfr"
Feb 20 12:22:57.163817 master-0 kubenswrapper[31420]: I0220 12:22:57.163752 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-n25nj"
Feb 20 12:22:57.517927 master-0 kubenswrapper[31420]: I0220 12:22:57.517862 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3014abfe-1825-4804-a49d-b1372fecdffc" path="/var/lib/kubelet/pods/3014abfe-1825-4804-a49d-b1372fecdffc/volumes"
Feb 20 12:22:57.585882 master-0 kubenswrapper[31420]: I0220 12:22:57.583141 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e02b1f2-9416-46c2-b4b8-f0c0a351b198","Type":"ContainerStarted","Data":"6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f"}
Feb 20 12:22:57.585882 master-0 kubenswrapper[31420]: I0220 12:22:57.583221 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e02b1f2-9416-46c2-b4b8-f0c0a351b198","Type":"ContainerStarted","Data":"8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528"}
Feb 20 12:22:57.633181 master-0 kubenswrapper[31420]: I0220 12:22:57.633064 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6330346650000003 podStartE2EDuration="2.633034665s" podCreationTimestamp="2026-02-20 12:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:57.60773957 +0000 UTC m=+1082.326977811" watchObservedRunningTime="2026-02-20 12:22:57.633034665 +0000 UTC m=+1082.352272926"
Feb 20 12:22:57.656428 master-0 kubenswrapper[31420]: I0220 12:22:57.656354 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lnnfr"]
Feb 20 12:22:57.656428 master-0 kubenswrapper[31420]: W0220 12:22:57.656399 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d111110_d012_4ec5_9c26_0701910b11b2.slice/crio-2a9b4006c9d74fe46bfadf473b820369324090b36fe1c9dcfb4745bad2587677 WatchSource:0}: Error finding container 2a9b4006c9d74fe46bfadf473b820369324090b36fe1c9dcfb4745bad2587677: Status 404 returned error can't find the container with id 2a9b4006c9d74fe46bfadf473b820369324090b36fe1c9dcfb4745bad2587677
Feb 20 12:22:57.765029 master-0 kubenswrapper[31420]: I0220 12:22:57.764965 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-n25nj"]
Feb 20 12:22:57.783212 master-0 kubenswrapper[31420]: W0220 12:22:57.783157 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8218b6ff_f982_48d1_8fa5_82ea8e531fe6.slice/crio-1df766d0735e7ce1fa7f4f22026f0f43d59e69dbdd3a321226610e7ccc7f4595 WatchSource:0}: Error finding container 1df766d0735e7ce1fa7f4f22026f0f43d59e69dbdd3a321226610e7ccc7f4595: Status 404 returned error can't find the container with id 1df766d0735e7ce1fa7f4f22026f0f43d59e69dbdd3a321226610e7ccc7f4595
Feb 20 12:22:58.603986 master-0 kubenswrapper[31420]: I0220 12:22:58.603890 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-n25nj" event={"ID":"8218b6ff-f982-48d1-8fa5-82ea8e531fe6","Type":"ContainerStarted","Data":"d296df2bdf90e964d7294b8678e3869cb6f5a9977d3ce43f146fcfd250bbf994"}
Feb 20 12:22:58.604813 master-0 kubenswrapper[31420]: I0220 12:22:58.604034 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-n25nj" event={"ID":"8218b6ff-f982-48d1-8fa5-82ea8e531fe6","Type":"ContainerStarted","Data":"1df766d0735e7ce1fa7f4f22026f0f43d59e69dbdd3a321226610e7ccc7f4595"}
Feb 20 12:22:58.613135 master-0 kubenswrapper[31420]: I0220 12:22:58.613069 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lnnfr" event={"ID":"1d111110-d012-4ec5-9c26-0701910b11b2","Type":"ContainerStarted","Data":"5c792a8d70d7c3874e833eb39c1bb9aa91d63055305749d26df7026bed545b13"}
Feb 20 12:22:58.613135 master-0 kubenswrapper[31420]: I0220 12:22:58.613127 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lnnfr" event={"ID":"1d111110-d012-4ec5-9c26-0701910b11b2","Type":"ContainerStarted","Data":"2a9b4006c9d74fe46bfadf473b820369324090b36fe1c9dcfb4745bad2587677"}
Feb 20 12:22:58.629822 master-0 kubenswrapper[31420]: I0220 12:22:58.629663 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-n25nj" podStartSLOduration=2.629601733 podStartE2EDuration="2.629601733s" podCreationTimestamp="2026-02-20 12:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:58.623806859 +0000 UTC m=+1083.343045100" watchObservedRunningTime="2026-02-20 12:22:58.629601733 +0000 UTC m=+1083.348839984"
Feb 20 12:22:58.666281 master-0 kubenswrapper[31420]: I0220 12:22:58.666180 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lnnfr" podStartSLOduration=2.666158826 podStartE2EDuration="2.666158826s" podCreationTimestamp="2026-02-20 12:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:22:58.642998041 +0000 UTC m=+1083.362236292" watchObservedRunningTime="2026-02-20 12:22:58.666158826 +0000 UTC m=+1083.385397067"
Feb 20 12:22:59.070805 master-0 kubenswrapper[31420]: I0220 12:22:59.069975 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6d4b4b47-8r4td"
Feb 20 12:22:59.186059 master-0 kubenswrapper[31420]: I0220 12:22:59.185657 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-694f5b8c75-kvdtj"]
Feb 20 12:22:59.186059 master-0 kubenswrapper[31420]: I0220 12:22:59.186032 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" podUID="c9303051-f88b-4545-b9ee-ddc0af81f1a7" containerName="dnsmasq-dns" containerID="cri-o://196847a0f30f1d3f92ca3b0096264e1e8d75ea6a188e90f925f06eba0b60df45" gracePeriod=10
Feb 20 12:22:59.631708 master-0 kubenswrapper[31420]: I0220 12:22:59.631582 31420 generic.go:334] "Generic (PLEG): container finished" podID="c9303051-f88b-4545-b9ee-ddc0af81f1a7" containerID="196847a0f30f1d3f92ca3b0096264e1e8d75ea6a188e90f925f06eba0b60df45" exitCode=0
Feb 20 12:22:59.631708 master-0 kubenswrapper[31420]: I0220 12:22:59.631665 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" event={"ID":"c9303051-f88b-4545-b9ee-ddc0af81f1a7","Type":"ContainerDied","Data":"196847a0f30f1d3f92ca3b0096264e1e8d75ea6a188e90f925f06eba0b60df45"}
Feb 20 12:22:59.872540 master-0 kubenswrapper[31420]: I0220 12:22:59.872484 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj"
Feb 20 12:22:59.963456 master-0 kubenswrapper[31420]: I0220 12:22:59.963254 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-swift-storage-0\") pod \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") "
Feb 20 12:22:59.963749 master-0 kubenswrapper[31420]: I0220 12:22:59.963568 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-nb\") pod \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") "
Feb 20 12:22:59.963749 master-0 kubenswrapper[31420]: I0220 12:22:59.963659 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-sb\") pod \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") "
Feb 20 12:22:59.963879 master-0 kubenswrapper[31420]: I0220 12:22:59.963829 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-config\") pod \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") "
Feb 20 12:22:59.963946 master-0 kubenswrapper[31420]: I0220 12:22:59.963915 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6stq\" (UniqueName: \"kubernetes.io/projected/c9303051-f88b-4545-b9ee-ddc0af81f1a7-kube-api-access-f6stq\") pod \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") "
Feb 20 12:22:59.963998 master-0 kubenswrapper[31420]: I0220 12:22:59.963986 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-svc\") pod \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") "
Feb 20 12:22:59.986558 master-0 kubenswrapper[31420]: I0220 12:22:59.970319 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9303051-f88b-4545-b9ee-ddc0af81f1a7-kube-api-access-f6stq" (OuterVolumeSpecName: "kube-api-access-f6stq") pod "c9303051-f88b-4545-b9ee-ddc0af81f1a7" (UID: "c9303051-f88b-4545-b9ee-ddc0af81f1a7"). InnerVolumeSpecName "kube-api-access-f6stq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:23:00.040563 master-0 kubenswrapper[31420]: I0220 12:23:00.037790 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c9303051-f88b-4545-b9ee-ddc0af81f1a7" (UID: "c9303051-f88b-4545-b9ee-ddc0af81f1a7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:23:00.044710 master-0 kubenswrapper[31420]: I0220 12:23:00.043708 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9303051-f88b-4545-b9ee-ddc0af81f1a7" (UID: "c9303051-f88b-4545-b9ee-ddc0af81f1a7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:23:00.056443 master-0 kubenswrapper[31420]: I0220 12:23:00.051914 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c9303051-f88b-4545-b9ee-ddc0af81f1a7" (UID: "c9303051-f88b-4545-b9ee-ddc0af81f1a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:23:00.058566 master-0 kubenswrapper[31420]: I0220 12:23:00.058489 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c9303051-f88b-4545-b9ee-ddc0af81f1a7" (UID: "c9303051-f88b-4545-b9ee-ddc0af81f1a7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:23:00.067615 master-0 kubenswrapper[31420]: I0220 12:23:00.066729 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-config" (OuterVolumeSpecName: "config") pod "c9303051-f88b-4545-b9ee-ddc0af81f1a7" (UID: "c9303051-f88b-4545-b9ee-ddc0af81f1a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:23:00.067615 master-0 kubenswrapper[31420]: I0220 12:23:00.067047 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-config\") pod \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\" (UID: \"c9303051-f88b-4545-b9ee-ddc0af81f1a7\") "
Feb 20 12:23:00.067615 master-0 kubenswrapper[31420]: W0220 12:23:00.067197 31420 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c9303051-f88b-4545-b9ee-ddc0af81f1a7/volumes/kubernetes.io~configmap/config
Feb 20 12:23:00.067615 master-0 kubenswrapper[31420]: I0220 12:23:00.067213 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-config" (OuterVolumeSpecName: "config") pod "c9303051-f88b-4545-b9ee-ddc0af81f1a7" (UID: "c9303051-f88b-4545-b9ee-ddc0af81f1a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:23:00.069803 master-0 kubenswrapper[31420]: I0220 12:23:00.067833 31420 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:00.069803 master-0 kubenswrapper[31420]: I0220 12:23:00.067856 31420 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:00.069803 master-0 kubenswrapper[31420]: I0220 12:23:00.067869 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:00.069803 master-0 kubenswrapper[31420]: I0220 12:23:00.067878 31420 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:00.069803 master-0 kubenswrapper[31420]: I0220 12:23:00.067887 31420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9303051-f88b-4545-b9ee-ddc0af81f1a7-config\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:00.069803 master-0 kubenswrapper[31420]: I0220 12:23:00.067897 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6stq\" (UniqueName: \"kubernetes.io/projected/c9303051-f88b-4545-b9ee-ddc0af81f1a7-kube-api-access-f6stq\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:00.657384 master-0 kubenswrapper[31420]: I0220 12:23:00.657005 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj"
Feb 20 12:23:00.657384 master-0 kubenswrapper[31420]: I0220 12:23:00.656934 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694f5b8c75-kvdtj" event={"ID":"c9303051-f88b-4545-b9ee-ddc0af81f1a7","Type":"ContainerDied","Data":"c4cecfd88c4d60d77ed412a5e7e2a9696960040aa789e052e878fdd68ea81b3f"}
Feb 20 12:23:00.657384 master-0 kubenswrapper[31420]: I0220 12:23:00.657230 31420 scope.go:117] "RemoveContainer" containerID="196847a0f30f1d3f92ca3b0096264e1e8d75ea6a188e90f925f06eba0b60df45"
Feb 20 12:23:00.660035 master-0 kubenswrapper[31420]: I0220 12:23:00.659584 31420 generic.go:334] "Generic (PLEG): container finished" podID="8218b6ff-f982-48d1-8fa5-82ea8e531fe6" containerID="d296df2bdf90e964d7294b8678e3869cb6f5a9977d3ce43f146fcfd250bbf994" exitCode=0
Feb 20 12:23:00.660035 master-0 kubenswrapper[31420]: I0220 12:23:00.659674 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-n25nj" event={"ID":"8218b6ff-f982-48d1-8fa5-82ea8e531fe6","Type":"ContainerDied","Data":"d296df2bdf90e964d7294b8678e3869cb6f5a9977d3ce43f146fcfd250bbf994"}
Feb 20 12:23:00.694109 master-0 kubenswrapper[31420]: I0220 12:23:00.694048 31420 scope.go:117] "RemoveContainer" containerID="eae0fcdf75faeb4886f06aeff56e7d31ce158eea1042d9fb67b1d4440245ed3a"
Feb 20 12:23:00.745163 master-0 kubenswrapper[31420]: I0220 12:23:00.745054 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-694f5b8c75-kvdtj"]
Feb 20 12:23:00.760746 master-0 kubenswrapper[31420]: I0220 12:23:00.760672 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-694f5b8c75-kvdtj"]
Feb 20 12:23:01.516655 master-0 kubenswrapper[31420]: I0220 12:23:01.516587 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9303051-f88b-4545-b9ee-ddc0af81f1a7" path="/var/lib/kubelet/pods/c9303051-f88b-4545-b9ee-ddc0af81f1a7/volumes"
Feb 20 12:23:02.326262 master-0 kubenswrapper[31420]: I0220 12:23:02.326205 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-n25nj" Feb 20 12:23:02.452138 master-0 kubenswrapper[31420]: I0220 12:23:02.451990 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-config-data\") pod \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " Feb 20 12:23:02.452138 master-0 kubenswrapper[31420]: I0220 12:23:02.452076 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9fwd\" (UniqueName: \"kubernetes.io/projected/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-kube-api-access-v9fwd\") pod \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " Feb 20 12:23:02.452706 master-0 kubenswrapper[31420]: I0220 12:23:02.452168 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-scripts\") pod \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " Feb 20 12:23:02.452706 master-0 kubenswrapper[31420]: I0220 12:23:02.452229 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-combined-ca-bundle\") pod \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\" (UID: \"8218b6ff-f982-48d1-8fa5-82ea8e531fe6\") " Feb 20 12:23:02.456448 master-0 kubenswrapper[31420]: I0220 12:23:02.456417 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-kube-api-access-v9fwd" (OuterVolumeSpecName: "kube-api-access-v9fwd") pod "8218b6ff-f982-48d1-8fa5-82ea8e531fe6" 
(UID: "8218b6ff-f982-48d1-8fa5-82ea8e531fe6"). InnerVolumeSpecName "kube-api-access-v9fwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:23:02.458251 master-0 kubenswrapper[31420]: I0220 12:23:02.458172 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-scripts" (OuterVolumeSpecName: "scripts") pod "8218b6ff-f982-48d1-8fa5-82ea8e531fe6" (UID: "8218b6ff-f982-48d1-8fa5-82ea8e531fe6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:02.486926 master-0 kubenswrapper[31420]: I0220 12:23:02.486858 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8218b6ff-f982-48d1-8fa5-82ea8e531fe6" (UID: "8218b6ff-f982-48d1-8fa5-82ea8e531fe6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:02.500571 master-0 kubenswrapper[31420]: I0220 12:23:02.500490 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-config-data" (OuterVolumeSpecName: "config-data") pod "8218b6ff-f982-48d1-8fa5-82ea8e531fe6" (UID: "8218b6ff-f982-48d1-8fa5-82ea8e531fe6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:02.556145 master-0 kubenswrapper[31420]: I0220 12:23:02.556082 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:02.556303 master-0 kubenswrapper[31420]: I0220 12:23:02.556148 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9fwd\" (UniqueName: \"kubernetes.io/projected/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-kube-api-access-v9fwd\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:02.556303 master-0 kubenswrapper[31420]: I0220 12:23:02.556170 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:02.556303 master-0 kubenswrapper[31420]: I0220 12:23:02.556189 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8218b6ff-f982-48d1-8fa5-82ea8e531fe6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:02.693824 master-0 kubenswrapper[31420]: I0220 12:23:02.693746 31420 generic.go:334] "Generic (PLEG): container finished" podID="1d111110-d012-4ec5-9c26-0701910b11b2" containerID="5c792a8d70d7c3874e833eb39c1bb9aa91d63055305749d26df7026bed545b13" exitCode=0 Feb 20 12:23:02.694075 master-0 kubenswrapper[31420]: I0220 12:23:02.694034 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lnnfr" event={"ID":"1d111110-d012-4ec5-9c26-0701910b11b2","Type":"ContainerDied","Data":"5c792a8d70d7c3874e833eb39c1bb9aa91d63055305749d26df7026bed545b13"} Feb 20 12:23:02.698573 master-0 kubenswrapper[31420]: I0220 12:23:02.698489 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-n25nj" 
event={"ID":"8218b6ff-f982-48d1-8fa5-82ea8e531fe6","Type":"ContainerDied","Data":"1df766d0735e7ce1fa7f4f22026f0f43d59e69dbdd3a321226610e7ccc7f4595"} Feb 20 12:23:02.698714 master-0 kubenswrapper[31420]: I0220 12:23:02.698579 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df766d0735e7ce1fa7f4f22026f0f43d59e69dbdd3a321226610e7ccc7f4595" Feb 20 12:23:02.698714 master-0 kubenswrapper[31420]: I0220 12:23:02.698627 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-n25nj" Feb 20 12:23:04.164747 master-0 kubenswrapper[31420]: I0220 12:23:04.164697 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lnnfr" Feb 20 12:23:04.297415 master-0 kubenswrapper[31420]: I0220 12:23:04.297369 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rcg9\" (UniqueName: \"kubernetes.io/projected/1d111110-d012-4ec5-9c26-0701910b11b2-kube-api-access-5rcg9\") pod \"1d111110-d012-4ec5-9c26-0701910b11b2\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " Feb 20 12:23:04.297620 master-0 kubenswrapper[31420]: I0220 12:23:04.297447 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-combined-ca-bundle\") pod \"1d111110-d012-4ec5-9c26-0701910b11b2\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " Feb 20 12:23:04.297620 master-0 kubenswrapper[31420]: I0220 12:23:04.297577 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-config-data\") pod \"1d111110-d012-4ec5-9c26-0701910b11b2\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " Feb 20 12:23:04.297620 master-0 kubenswrapper[31420]: I0220 12:23:04.297603 31420 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-scripts\") pod \"1d111110-d012-4ec5-9c26-0701910b11b2\" (UID: \"1d111110-d012-4ec5-9c26-0701910b11b2\") " Feb 20 12:23:04.300843 master-0 kubenswrapper[31420]: I0220 12:23:04.300793 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-scripts" (OuterVolumeSpecName: "scripts") pod "1d111110-d012-4ec5-9c26-0701910b11b2" (UID: "1d111110-d012-4ec5-9c26-0701910b11b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:04.303905 master-0 kubenswrapper[31420]: I0220 12:23:04.303814 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d111110-d012-4ec5-9c26-0701910b11b2-kube-api-access-5rcg9" (OuterVolumeSpecName: "kube-api-access-5rcg9") pod "1d111110-d012-4ec5-9c26-0701910b11b2" (UID: "1d111110-d012-4ec5-9c26-0701910b11b2"). InnerVolumeSpecName "kube-api-access-5rcg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:23:04.330397 master-0 kubenswrapper[31420]: I0220 12:23:04.329889 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d111110-d012-4ec5-9c26-0701910b11b2" (UID: "1d111110-d012-4ec5-9c26-0701910b11b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:04.343846 master-0 kubenswrapper[31420]: I0220 12:23:04.343781 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-config-data" (OuterVolumeSpecName: "config-data") pod "1d111110-d012-4ec5-9c26-0701910b11b2" (UID: "1d111110-d012-4ec5-9c26-0701910b11b2"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:04.401131 master-0 kubenswrapper[31420]: I0220 12:23:04.401071 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rcg9\" (UniqueName: \"kubernetes.io/projected/1d111110-d012-4ec5-9c26-0701910b11b2-kube-api-access-5rcg9\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:04.401131 master-0 kubenswrapper[31420]: I0220 12:23:04.401132 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:04.401503 master-0 kubenswrapper[31420]: I0220 12:23:04.401146 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:04.401503 master-0 kubenswrapper[31420]: I0220 12:23:04.401160 31420 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d111110-d012-4ec5-9c26-0701910b11b2-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:04.700855 master-0 kubenswrapper[31420]: I0220 12:23:04.700658 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:23:04.701214 master-0 kubenswrapper[31420]: I0220 12:23:04.701092 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerName="nova-api-log" containerID="cri-o://8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528" gracePeriod=30 Feb 20 12:23:04.701868 master-0 kubenswrapper[31420]: I0220 12:23:04.701818 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerName="nova-api-api" 
containerID="cri-o://6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f" gracePeriod=30 Feb 20 12:23:04.714287 master-0 kubenswrapper[31420]: I0220 12:23:04.714222 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:23:04.714541 master-0 kubenswrapper[31420]: I0220 12:23:04.714474 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0465bc3e-a104-4b88-b897-8d59c142c137" containerName="nova-scheduler-scheduler" containerID="cri-o://320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812" gracePeriod=30 Feb 20 12:23:04.727391 master-0 kubenswrapper[31420]: I0220 12:23:04.727329 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lnnfr" event={"ID":"1d111110-d012-4ec5-9c26-0701910b11b2","Type":"ContainerDied","Data":"2a9b4006c9d74fe46bfadf473b820369324090b36fe1c9dcfb4745bad2587677"} Feb 20 12:23:04.727391 master-0 kubenswrapper[31420]: I0220 12:23:04.727390 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a9b4006c9d74fe46bfadf473b820369324090b36fe1c9dcfb4745bad2587677" Feb 20 12:23:04.727560 master-0 kubenswrapper[31420]: I0220 12:23:04.727479 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lnnfr" Feb 20 12:23:04.786159 master-0 kubenswrapper[31420]: I0220 12:23:04.786072 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 12:23:04.786373 master-0 kubenswrapper[31420]: I0220 12:23:04.786352 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-log" containerID="cri-o://e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49" gracePeriod=30 Feb 20 12:23:04.786519 master-0 kubenswrapper[31420]: I0220 12:23:04.786491 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-metadata" containerID="cri-o://a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e" gracePeriod=30 Feb 20 12:23:05.374612 master-0 kubenswrapper[31420]: I0220 12:23:05.374564 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 20 12:23:05.549343 master-0 kubenswrapper[31420]: I0220 12:23:05.549208 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-internal-tls-certs\") pod \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " Feb 20 12:23:05.549552 master-0 kubenswrapper[31420]: I0220 12:23:05.549446 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-config-data\") pod \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " Feb 20 12:23:05.549552 master-0 kubenswrapper[31420]: I0220 12:23:05.549497 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-public-tls-certs\") pod \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " Feb 20 12:23:05.549660 master-0 kubenswrapper[31420]: I0220 12:23:05.549624 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-logs\") pod \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " Feb 20 12:23:05.549790 master-0 kubenswrapper[31420]: I0220 12:23:05.549770 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9dkd\" (UniqueName: \"kubernetes.io/projected/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-kube-api-access-l9dkd\") pod \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " Feb 20 12:23:05.550238 master-0 kubenswrapper[31420]: I0220 12:23:05.549839 31420 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-combined-ca-bundle\") pod \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\" (UID: \"8e02b1f2-9416-46c2-b4b8-f0c0a351b198\") " Feb 20 12:23:05.550238 master-0 kubenswrapper[31420]: I0220 12:23:05.549969 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-logs" (OuterVolumeSpecName: "logs") pod "8e02b1f2-9416-46c2-b4b8-f0c0a351b198" (UID: "8e02b1f2-9416-46c2-b4b8-f0c0a351b198"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 12:23:05.550886 master-0 kubenswrapper[31420]: I0220 12:23:05.550851 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:05.553328 master-0 kubenswrapper[31420]: I0220 12:23:05.553285 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-kube-api-access-l9dkd" (OuterVolumeSpecName: "kube-api-access-l9dkd") pod "8e02b1f2-9416-46c2-b4b8-f0c0a351b198" (UID: "8e02b1f2-9416-46c2-b4b8-f0c0a351b198"). InnerVolumeSpecName "kube-api-access-l9dkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:23:05.591322 master-0 kubenswrapper[31420]: I0220 12:23:05.587584 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e02b1f2-9416-46c2-b4b8-f0c0a351b198" (UID: "8e02b1f2-9416-46c2-b4b8-f0c0a351b198"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:05.591322 master-0 kubenswrapper[31420]: I0220 12:23:05.587706 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-config-data" (OuterVolumeSpecName: "config-data") pod "8e02b1f2-9416-46c2-b4b8-f0c0a351b198" (UID: "8e02b1f2-9416-46c2-b4b8-f0c0a351b198"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:05.611640 master-0 kubenswrapper[31420]: I0220 12:23:05.611582 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8e02b1f2-9416-46c2-b4b8-f0c0a351b198" (UID: "8e02b1f2-9416-46c2-b4b8-f0c0a351b198"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:05.624295 master-0 kubenswrapper[31420]: I0220 12:23:05.624231 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8e02b1f2-9416-46c2-b4b8-f0c0a351b198" (UID: "8e02b1f2-9416-46c2-b4b8-f0c0a351b198"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:05.653299 master-0 kubenswrapper[31420]: I0220 12:23:05.653240 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:05.653299 master-0 kubenswrapper[31420]: I0220 12:23:05.653283 31420 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:05.653299 master-0 kubenswrapper[31420]: I0220 12:23:05.653296 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9dkd\" (UniqueName: \"kubernetes.io/projected/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-kube-api-access-l9dkd\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:05.653299 master-0 kubenswrapper[31420]: I0220 12:23:05.653308 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:05.653299 master-0 kubenswrapper[31420]: I0220 12:23:05.653316 31420 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e02b1f2-9416-46c2-b4b8-f0c0a351b198-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:05.742438 master-0 kubenswrapper[31420]: I0220 12:23:05.742364 31420 generic.go:334] "Generic (PLEG): container finished" podID="f6818035-947a-43e2-8708-c2b70b22b705" containerID="e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49" exitCode=143 Feb 20 12:23:05.742638 master-0 kubenswrapper[31420]: I0220 12:23:05.742454 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f6818035-947a-43e2-8708-c2b70b22b705","Type":"ContainerDied","Data":"e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49"} Feb 20 12:23:05.744739 master-0 kubenswrapper[31420]: I0220 12:23:05.744713 31420 generic.go:334] "Generic (PLEG): container finished" podID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerID="6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f" exitCode=0 Feb 20 12:23:05.744739 master-0 kubenswrapper[31420]: I0220 12:23:05.744738 31420 generic.go:334] "Generic (PLEG): container finished" podID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerID="8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528" exitCode=143 Feb 20 12:23:05.744854 master-0 kubenswrapper[31420]: I0220 12:23:05.744745 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e02b1f2-9416-46c2-b4b8-f0c0a351b198","Type":"ContainerDied","Data":"6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f"} Feb 20 12:23:05.744854 master-0 kubenswrapper[31420]: I0220 12:23:05.744764 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 20 12:23:05.744854 master-0 kubenswrapper[31420]: I0220 12:23:05.744812 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e02b1f2-9416-46c2-b4b8-f0c0a351b198","Type":"ContainerDied","Data":"8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528"} Feb 20 12:23:05.744854 master-0 kubenswrapper[31420]: I0220 12:23:05.744831 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e02b1f2-9416-46c2-b4b8-f0c0a351b198","Type":"ContainerDied","Data":"534430aef9d67b6b8415f4389acbd32455a412e36e22087b211c5c27d0ff521e"} Feb 20 12:23:05.744854 master-0 kubenswrapper[31420]: I0220 12:23:05.744851 31420 scope.go:117] "RemoveContainer" containerID="6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f" Feb 20 12:23:05.786932 master-0 kubenswrapper[31420]: I0220 12:23:05.786774 31420 scope.go:117] "RemoveContainer" containerID="8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528" Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: I0220 12:23:05.821795 31420 scope.go:117] "RemoveContainer" containerID="6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f" Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: E0220 12:23:05.822293 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f\": container with ID starting with 6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f not found: ID does not exist" containerID="6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f" Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: I0220 12:23:05.822348 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f"} err="failed to get container 
status \"6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f\": rpc error: code = NotFound desc = could not find container \"6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f\": container with ID starting with 6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f not found: ID does not exist" Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: I0220 12:23:05.822380 31420 scope.go:117] "RemoveContainer" containerID="8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528" Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: I0220 12:23:05.822484 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: E0220 12:23:05.822711 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528\": container with ID starting with 8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528 not found: ID does not exist" containerID="8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528" Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: I0220 12:23:05.822732 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528"} err="failed to get container status \"8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528\": rpc error: code = NotFound desc = could not find container \"8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528\": container with ID starting with 8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528 not found: ID does not exist" Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: I0220 12:23:05.822751 31420 scope.go:117] "RemoveContainer" containerID="6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f" Feb 20 12:23:05.829001 master-0 
kubenswrapper[31420]: I0220 12:23:05.823219 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f"} err="failed to get container status \"6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f\": rpc error: code = NotFound desc = could not find container \"6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f\": container with ID starting with 6663b60533f2bd290f4efbeb655cd1ba0ff27bd8114f87c02fd5895e83e7bf9f not found: ID does not exist" Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: I0220 12:23:05.823239 31420 scope.go:117] "RemoveContainer" containerID="8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528" Feb 20 12:23:05.829001 master-0 kubenswrapper[31420]: I0220 12:23:05.823746 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528"} err="failed to get container status \"8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528\": rpc error: code = NotFound desc = could not find container \"8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528\": container with ID starting with 8deafa8717c73ec2a40cec4a78ed734e15d8b972e524502ef0428c821e000528 not found: ID does not exist" Feb 20 12:23:05.856474 master-0 kubenswrapper[31420]: I0220 12:23:05.856417 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 20 12:23:05.888242 master-0 kubenswrapper[31420]: I0220 12:23:05.888175 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 20 12:23:05.891245 master-0 kubenswrapper[31420]: E0220 12:23:05.891192 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9303051-f88b-4545-b9ee-ddc0af81f1a7" containerName="init" Feb 20 12:23:05.891331 master-0 kubenswrapper[31420]: I0220 12:23:05.891248 31420 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c9303051-f88b-4545-b9ee-ddc0af81f1a7" containerName="init" Feb 20 12:23:05.891376 master-0 kubenswrapper[31420]: E0220 12:23:05.891333 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9303051-f88b-4545-b9ee-ddc0af81f1a7" containerName="dnsmasq-dns" Feb 20 12:23:05.891376 master-0 kubenswrapper[31420]: I0220 12:23:05.891345 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9303051-f88b-4545-b9ee-ddc0af81f1a7" containerName="dnsmasq-dns" Feb 20 12:23:05.891376 master-0 kubenswrapper[31420]: E0220 12:23:05.891363 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8218b6ff-f982-48d1-8fa5-82ea8e531fe6" containerName="nova-manage" Feb 20 12:23:05.891376 master-0 kubenswrapper[31420]: I0220 12:23:05.891372 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8218b6ff-f982-48d1-8fa5-82ea8e531fe6" containerName="nova-manage" Feb 20 12:23:05.891497 master-0 kubenswrapper[31420]: E0220 12:23:05.891418 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerName="nova-api-api" Feb 20 12:23:05.891497 master-0 kubenswrapper[31420]: I0220 12:23:05.891429 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerName="nova-api-api" Feb 20 12:23:05.891497 master-0 kubenswrapper[31420]: E0220 12:23:05.891445 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d111110-d012-4ec5-9c26-0701910b11b2" containerName="nova-manage" Feb 20 12:23:05.891497 master-0 kubenswrapper[31420]: I0220 12:23:05.891492 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d111110-d012-4ec5-9c26-0701910b11b2" containerName="nova-manage" Feb 20 12:23:05.891640 master-0 kubenswrapper[31420]: E0220 12:23:05.891511 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerName="nova-api-log" Feb 20 
12:23:05.891640 master-0 kubenswrapper[31420]: I0220 12:23:05.891521 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerName="nova-api-log"
Feb 20 12:23:05.892397 master-0 kubenswrapper[31420]: I0220 12:23:05.892358 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerName="nova-api-api"
Feb 20 12:23:05.892446 master-0 kubenswrapper[31420]: I0220 12:23:05.892418 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8218b6ff-f982-48d1-8fa5-82ea8e531fe6" containerName="nova-manage"
Feb 20 12:23:05.892485 master-0 kubenswrapper[31420]: I0220 12:23:05.892457 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d111110-d012-4ec5-9c26-0701910b11b2" containerName="nova-manage"
Feb 20 12:23:05.892485 master-0 kubenswrapper[31420]: I0220 12:23:05.892474 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9303051-f88b-4545-b9ee-ddc0af81f1a7" containerName="dnsmasq-dns"
Feb 20 12:23:05.892566 master-0 kubenswrapper[31420]: I0220 12:23:05.892508 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" containerName="nova-api-log"
Feb 20 12:23:05.894196 master-0 kubenswrapper[31420]: I0220 12:23:05.894168 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 12:23:05.896547 master-0 kubenswrapper[31420]: I0220 12:23:05.896410 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 20 12:23:05.896547 master-0 kubenswrapper[31420]: I0220 12:23:05.896504 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 20 12:23:05.897037 master-0 kubenswrapper[31420]: I0220 12:23:05.896506 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 20 12:23:05.905344 master-0 kubenswrapper[31420]: I0220 12:23:05.904703 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 20 12:23:06.068898 master-0 kubenswrapper[31420]: I0220 12:23:06.068813 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.069107 master-0 kubenswrapper[31420]: I0220 12:23:06.069083 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.070633 master-0 kubenswrapper[31420]: I0220 12:23:06.069411 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-config-data\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.070633 master-0 kubenswrapper[31420]: I0220 12:23:06.069467 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.070633 master-0 kubenswrapper[31420]: I0220 12:23:06.069557 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6679d6f-5fd2-407d-96b4-2dcea806dec6-logs\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.070633 master-0 kubenswrapper[31420]: I0220 12:23:06.069612 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bksdw\" (UniqueName: \"kubernetes.io/projected/b6679d6f-5fd2-407d-96b4-2dcea806dec6-kube-api-access-bksdw\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.171553 master-0 kubenswrapper[31420]: I0220 12:23:06.171363 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.171553 master-0 kubenswrapper[31420]: I0220 12:23:06.171544 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.172677 master-0 kubenswrapper[31420]: I0220 12:23:06.171671 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-config-data\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.172677 master-0 kubenswrapper[31420]: I0220 12:23:06.171823 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.172677 master-0 kubenswrapper[31420]: I0220 12:23:06.171876 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6679d6f-5fd2-407d-96b4-2dcea806dec6-logs\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.172677 master-0 kubenswrapper[31420]: I0220 12:23:06.171909 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bksdw\" (UniqueName: \"kubernetes.io/projected/b6679d6f-5fd2-407d-96b4-2dcea806dec6-kube-api-access-bksdw\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.172677 master-0 kubenswrapper[31420]: I0220 12:23:06.172471 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6679d6f-5fd2-407d-96b4-2dcea806dec6-logs\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.175090 master-0 kubenswrapper[31420]: I0220 12:23:06.175041 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.175090 master-0 kubenswrapper[31420]: I0220 12:23:06.175075 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-config-data\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.177520 master-0 kubenswrapper[31420]: I0220 12:23:06.177456 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.178139 master-0 kubenswrapper[31420]: I0220 12:23:06.178092 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6679d6f-5fd2-407d-96b4-2dcea806dec6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.189538 master-0 kubenswrapper[31420]: I0220 12:23:06.187904 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bksdw\" (UniqueName: \"kubernetes.io/projected/b6679d6f-5fd2-407d-96b4-2dcea806dec6-kube-api-access-bksdw\") pod \"nova-api-0\" (UID: \"b6679d6f-5fd2-407d-96b4-2dcea806dec6\") " pod="openstack/nova-api-0"
Feb 20 12:23:06.221264 master-0 kubenswrapper[31420]: I0220 12:23:06.221204 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 12:23:06.510561 master-0 kubenswrapper[31420]: E0220 12:23:06.510407 31420 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 20 12:23:06.513371 master-0 kubenswrapper[31420]: E0220 12:23:06.513152 31420 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 20 12:23:06.527297 master-0 kubenswrapper[31420]: E0220 12:23:06.527188 31420 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 20 12:23:06.527474 master-0 kubenswrapper[31420]: E0220 12:23:06.527308 31420 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0465bc3e-a104-4b88-b897-8d59c142c137" containerName="nova-scheduler-scheduler"
Feb 20 12:23:06.716562 master-0 kubenswrapper[31420]: W0220 12:23:06.714903 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6679d6f_5fd2_407d_96b4_2dcea806dec6.slice/crio-ba142c72bec90e177ae47d55597fae626cda077662483b55d42ee56037faf8e6 WatchSource:0}: Error finding container ba142c72bec90e177ae47d55597fae626cda077662483b55d42ee56037faf8e6: Status 404 returned error can't find the container with id ba142c72bec90e177ae47d55597fae626cda077662483b55d42ee56037faf8e6
Feb 20 12:23:06.719744 master-0 kubenswrapper[31420]: I0220 12:23:06.717864 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 20 12:23:06.758347 master-0 kubenswrapper[31420]: I0220 12:23:06.758270 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b6679d6f-5fd2-407d-96b4-2dcea806dec6","Type":"ContainerStarted","Data":"ba142c72bec90e177ae47d55597fae626cda077662483b55d42ee56037faf8e6"}
Feb 20 12:23:07.509323 master-0 kubenswrapper[31420]: I0220 12:23:07.509259 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e02b1f2-9416-46c2-b4b8-f0c0a351b198" path="/var/lib/kubelet/pods/8e02b1f2-9416-46c2-b4b8-f0c0a351b198/volumes"
Feb 20 12:23:07.776848 master-0 kubenswrapper[31420]: I0220 12:23:07.776068 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b6679d6f-5fd2-407d-96b4-2dcea806dec6","Type":"ContainerStarted","Data":"13b1e883884bba50c2de7052b3f03e3ef75eccdcb827266897fefb96df65f1b6"}
Feb 20 12:23:07.777960 master-0 kubenswrapper[31420]: I0220 12:23:07.776829 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b6679d6f-5fd2-407d-96b4-2dcea806dec6","Type":"ContainerStarted","Data":"2d7ff39431ca62f41204b7d1f0a0ab1f87c691973002dbe47df4eb997ddd091e"}
Feb 20 12:23:07.804785 master-0 kubenswrapper[31420]: I0220 12:23:07.804689 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.804668985 podStartE2EDuration="2.804668985s" podCreationTimestamp="2026-02-20 12:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:23:07.80271487 +0000 UTC m=+1092.521953151" watchObservedRunningTime="2026-02-20 12:23:07.804668985 +0000 UTC m=+1092.523907236"
Feb 20 12:23:08.461575 master-0 kubenswrapper[31420]: I0220 12:23:08.461473 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 20 12:23:08.543116 master-0 kubenswrapper[31420]: I0220 12:23:08.542996 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6818035-947a-43e2-8708-c2b70b22b705-logs\") pod \"f6818035-947a-43e2-8708-c2b70b22b705\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") "
Feb 20 12:23:08.543386 master-0 kubenswrapper[31420]: I0220 12:23:08.543245 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-combined-ca-bundle\") pod \"f6818035-947a-43e2-8708-c2b70b22b705\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") "
Feb 20 12:23:08.543386 master-0 kubenswrapper[31420]: I0220 12:23:08.543296 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-nova-metadata-tls-certs\") pod \"f6818035-947a-43e2-8708-c2b70b22b705\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") "
Feb 20 12:23:08.543386 master-0 kubenswrapper[31420]: I0220 12:23:08.543323 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-config-data\") pod \"f6818035-947a-43e2-8708-c2b70b22b705\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") "
Feb 20 12:23:08.543386 master-0 kubenswrapper[31420]: I0220 12:23:08.543341 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5htt\" (UniqueName: \"kubernetes.io/projected/f6818035-947a-43e2-8708-c2b70b22b705-kube-api-access-c5htt\") pod \"f6818035-947a-43e2-8708-c2b70b22b705\" (UID: \"f6818035-947a-43e2-8708-c2b70b22b705\") "
Feb 20 12:23:08.543694 master-0 kubenswrapper[31420]: I0220 12:23:08.543544 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6818035-947a-43e2-8708-c2b70b22b705-logs" (OuterVolumeSpecName: "logs") pod "f6818035-947a-43e2-8708-c2b70b22b705" (UID: "f6818035-947a-43e2-8708-c2b70b22b705"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 12:23:08.544271 master-0 kubenswrapper[31420]: I0220 12:23:08.544221 31420 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6818035-947a-43e2-8708-c2b70b22b705-logs\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:08.549439 master-0 kubenswrapper[31420]: I0220 12:23:08.549319 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6818035-947a-43e2-8708-c2b70b22b705-kube-api-access-c5htt" (OuterVolumeSpecName: "kube-api-access-c5htt") pod "f6818035-947a-43e2-8708-c2b70b22b705" (UID: "f6818035-947a-43e2-8708-c2b70b22b705"). InnerVolumeSpecName "kube-api-access-c5htt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:23:08.584658 master-0 kubenswrapper[31420]: I0220 12:23:08.584592 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-config-data" (OuterVolumeSpecName: "config-data") pod "f6818035-947a-43e2-8708-c2b70b22b705" (UID: "f6818035-947a-43e2-8708-c2b70b22b705"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:23:08.587856 master-0 kubenswrapper[31420]: I0220 12:23:08.587795 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6818035-947a-43e2-8708-c2b70b22b705" (UID: "f6818035-947a-43e2-8708-c2b70b22b705"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:23:08.615184 master-0 kubenswrapper[31420]: I0220 12:23:08.615040 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f6818035-947a-43e2-8708-c2b70b22b705" (UID: "f6818035-947a-43e2-8708-c2b70b22b705"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:23:08.646900 master-0 kubenswrapper[31420]: I0220 12:23:08.646840 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:08.646900 master-0 kubenswrapper[31420]: I0220 12:23:08.646890 31420 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:08.647242 master-0 kubenswrapper[31420]: I0220 12:23:08.646909 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6818035-947a-43e2-8708-c2b70b22b705-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:08.647242 master-0 kubenswrapper[31420]: I0220 12:23:08.646920 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5htt\" (UniqueName: \"kubernetes.io/projected/f6818035-947a-43e2-8708-c2b70b22b705-kube-api-access-c5htt\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:08.791083 master-0 kubenswrapper[31420]: I0220 12:23:08.790989 31420 generic.go:334] "Generic (PLEG): container finished" podID="f6818035-947a-43e2-8708-c2b70b22b705" containerID="a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e" exitCode=0
Feb 20 12:23:08.791083 master-0 kubenswrapper[31420]: I0220 12:23:08.791063 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 20 12:23:08.791741 master-0 kubenswrapper[31420]: I0220 12:23:08.791061 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6818035-947a-43e2-8708-c2b70b22b705","Type":"ContainerDied","Data":"a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e"}
Feb 20 12:23:08.791741 master-0 kubenswrapper[31420]: I0220 12:23:08.791186 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6818035-947a-43e2-8708-c2b70b22b705","Type":"ContainerDied","Data":"3fa25bf0a55dc772b662e5a7f87f3345aa7fec8708b08e88bec23527914e5973"}
Feb 20 12:23:08.791741 master-0 kubenswrapper[31420]: I0220 12:23:08.791203 31420 scope.go:117] "RemoveContainer" containerID="a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e"
Feb 20 12:23:08.836740 master-0 kubenswrapper[31420]: I0220 12:23:08.835318 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 12:23:08.848598 master-0 kubenswrapper[31420]: I0220 12:23:08.848539 31420 scope.go:117] "RemoveContainer" containerID="e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49"
Feb 20 12:23:08.852823 master-0 kubenswrapper[31420]: I0220 12:23:08.852755 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 12:23:08.874825 master-0 kubenswrapper[31420]: I0220 12:23:08.874308 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 12:23:08.875619 master-0 kubenswrapper[31420]: E0220 12:23:08.875591 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-metadata"
Feb 20 12:23:08.875752 master-0 kubenswrapper[31420]: I0220 12:23:08.875621 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-metadata"
Feb 20 12:23:08.875752 master-0 kubenswrapper[31420]: E0220 12:23:08.875684 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-log"
Feb 20 12:23:08.875752 master-0 kubenswrapper[31420]: I0220 12:23:08.875696 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-log"
Feb 20 12:23:08.877017 master-0 kubenswrapper[31420]: I0220 12:23:08.876076 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-metadata"
Feb 20 12:23:08.877017 master-0 kubenswrapper[31420]: I0220 12:23:08.876122 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6818035-947a-43e2-8708-c2b70b22b705" containerName="nova-metadata-log"
Feb 20 12:23:08.878421 master-0 kubenswrapper[31420]: I0220 12:23:08.878378 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 20 12:23:08.881367 master-0 kubenswrapper[31420]: I0220 12:23:08.881323 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 20 12:23:08.881620 master-0 kubenswrapper[31420]: I0220 12:23:08.881578 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 20 12:23:08.898050 master-0 kubenswrapper[31420]: I0220 12:23:08.897991 31420 scope.go:117] "RemoveContainer" containerID="a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e"
Feb 20 12:23:08.898798 master-0 kubenswrapper[31420]: E0220 12:23:08.898721 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e\": container with ID starting with a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e not found: ID does not exist" containerID="a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e"
Feb 20 12:23:08.898883 master-0 kubenswrapper[31420]: I0220 12:23:08.898821 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e"} err="failed to get container status \"a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e\": rpc error: code = NotFound desc = could not find container \"a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e\": container with ID starting with a1804506c2c8d07f18814b8df7e13e348515ecf888e85476072b81e81061361e not found: ID does not exist"
Feb 20 12:23:08.898883 master-0 kubenswrapper[31420]: I0220 12:23:08.898856 31420 scope.go:117] "RemoveContainer" containerID="e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49"
Feb 20 12:23:08.899493 master-0 kubenswrapper[31420]: E0220 12:23:08.899440 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49\": container with ID starting with e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49 not found: ID does not exist" containerID="e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49"
Feb 20 12:23:08.899593 master-0 kubenswrapper[31420]: I0220 12:23:08.899507 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49"} err="failed to get container status \"e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49\": rpc error: code = NotFound desc = could not find container \"e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49\": container with ID starting with e23402f9c68a7c226a163710aff90ff3dbef3c11d2dfbfe0d841f52bb03c4e49 not found: ID does not exist"
Feb 20 12:23:08.910780 master-0 kubenswrapper[31420]: I0220 12:23:08.910708 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 12:23:09.057029 master-0 kubenswrapper[31420]: I0220 12:23:09.056891 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd027dd-3995-4508-bbee-7c776c2d6fe4-config-data\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.057927 master-0 kubenswrapper[31420]: I0220 12:23:09.057874 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd027dd-3995-4508-bbee-7c776c2d6fe4-logs\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.058161 master-0 kubenswrapper[31420]: I0220 12:23:09.058100 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd027dd-3995-4508-bbee-7c776c2d6fe4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.058351 master-0 kubenswrapper[31420]: I0220 12:23:09.058272 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xj95\" (UniqueName: \"kubernetes.io/projected/0dd027dd-3995-4508-bbee-7c776c2d6fe4-kube-api-access-9xj95\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.058447 master-0 kubenswrapper[31420]: I0220 12:23:09.058415 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd027dd-3995-4508-bbee-7c776c2d6fe4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.160762 master-0 kubenswrapper[31420]: I0220 12:23:09.160685 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd027dd-3995-4508-bbee-7c776c2d6fe4-config-data\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.160762 master-0 kubenswrapper[31420]: I0220 12:23:09.160773 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd027dd-3995-4508-bbee-7c776c2d6fe4-logs\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.161187 master-0 kubenswrapper[31420]: I0220 12:23:09.160812 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd027dd-3995-4508-bbee-7c776c2d6fe4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.161187 master-0 kubenswrapper[31420]: I0220 12:23:09.160841 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xj95\" (UniqueName: \"kubernetes.io/projected/0dd027dd-3995-4508-bbee-7c776c2d6fe4-kube-api-access-9xj95\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.161187 master-0 kubenswrapper[31420]: I0220 12:23:09.161124 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd027dd-3995-4508-bbee-7c776c2d6fe4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.161719 master-0 kubenswrapper[31420]: I0220 12:23:09.161630 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd027dd-3995-4508-bbee-7c776c2d6fe4-logs\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.166744 master-0 kubenswrapper[31420]: I0220 12:23:09.166677 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd027dd-3995-4508-bbee-7c776c2d6fe4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.167395 master-0 kubenswrapper[31420]: I0220 12:23:09.167343 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd027dd-3995-4508-bbee-7c776c2d6fe4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.171240 master-0 kubenswrapper[31420]: I0220 12:23:09.171158 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd027dd-3995-4508-bbee-7c776c2d6fe4-config-data\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.190333 master-0 kubenswrapper[31420]: I0220 12:23:09.190053 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xj95\" (UniqueName: \"kubernetes.io/projected/0dd027dd-3995-4508-bbee-7c776c2d6fe4-kube-api-access-9xj95\") pod \"nova-metadata-0\" (UID: \"0dd027dd-3995-4508-bbee-7c776c2d6fe4\") " pod="openstack/nova-metadata-0"
Feb 20 12:23:09.215114 master-0 kubenswrapper[31420]: I0220 12:23:09.215038 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 20 12:23:09.514252 master-0 kubenswrapper[31420]: I0220 12:23:09.514176 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6818035-947a-43e2-8708-c2b70b22b705" path="/var/lib/kubelet/pods/f6818035-947a-43e2-8708-c2b70b22b705/volumes"
Feb 20 12:23:09.740973 master-0 kubenswrapper[31420]: I0220 12:23:09.740876 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 12:23:09.747309 master-0 kubenswrapper[31420]: W0220 12:23:09.746683 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dd027dd_3995_4508_bbee_7c776c2d6fe4.slice/crio-3a796cf12ddaf8eac494d1a97984ef0c4b4cf6b557fc15ba975feda8dcd69909 WatchSource:0}: Error finding container 3a796cf12ddaf8eac494d1a97984ef0c4b4cf6b557fc15ba975feda8dcd69909: Status 404 returned error can't find the container with id 3a796cf12ddaf8eac494d1a97984ef0c4b4cf6b557fc15ba975feda8dcd69909
Feb 20 12:23:09.811647 master-0 kubenswrapper[31420]: I0220 12:23:09.811585 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0dd027dd-3995-4508-bbee-7c776c2d6fe4","Type":"ContainerStarted","Data":"3a796cf12ddaf8eac494d1a97984ef0c4b4cf6b557fc15ba975feda8dcd69909"}
Feb 20 12:23:10.591607 master-0 kubenswrapper[31420]: I0220 12:23:10.588427 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 20 12:23:10.712452 master-0 kubenswrapper[31420]: I0220 12:23:10.712359 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-config-data\") pod \"0465bc3e-a104-4b88-b897-8d59c142c137\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") "
Feb 20 12:23:10.712452 master-0 kubenswrapper[31420]: I0220 12:23:10.712456 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpfrd\" (UniqueName: \"kubernetes.io/projected/0465bc3e-a104-4b88-b897-8d59c142c137-kube-api-access-vpfrd\") pod \"0465bc3e-a104-4b88-b897-8d59c142c137\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") "
Feb 20 12:23:10.712854 master-0 kubenswrapper[31420]: I0220 12:23:10.712575 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-combined-ca-bundle\") pod \"0465bc3e-a104-4b88-b897-8d59c142c137\" (UID: \"0465bc3e-a104-4b88-b897-8d59c142c137\") "
Feb 20 12:23:10.717068 master-0 kubenswrapper[31420]: I0220 12:23:10.717015 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0465bc3e-a104-4b88-b897-8d59c142c137-kube-api-access-vpfrd" (OuterVolumeSpecName: "kube-api-access-vpfrd") pod "0465bc3e-a104-4b88-b897-8d59c142c137" (UID: "0465bc3e-a104-4b88-b897-8d59c142c137"). InnerVolumeSpecName "kube-api-access-vpfrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:23:10.753648 master-0 kubenswrapper[31420]: I0220 12:23:10.753593 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0465bc3e-a104-4b88-b897-8d59c142c137" (UID: "0465bc3e-a104-4b88-b897-8d59c142c137"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:23:10.762322 master-0 kubenswrapper[31420]: I0220 12:23:10.762271 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-config-data" (OuterVolumeSpecName: "config-data") pod "0465bc3e-a104-4b88-b897-8d59c142c137" (UID: "0465bc3e-a104-4b88-b897-8d59c142c137"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:23:10.817771 master-0 kubenswrapper[31420]: I0220 12:23:10.817717 31420 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:10.817771 master-0 kubenswrapper[31420]: I0220 12:23:10.817765 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpfrd\" (UniqueName: \"kubernetes.io/projected/0465bc3e-a104-4b88-b897-8d59c142c137-kube-api-access-vpfrd\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:10.817771 master-0 kubenswrapper[31420]: I0220 12:23:10.817777 31420 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0465bc3e-a104-4b88-b897-8d59c142c137-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 12:23:10.827547 master-0 kubenswrapper[31420]: I0220 12:23:10.827475 31420 generic.go:334] "Generic (PLEG): container finished" podID="0465bc3e-a104-4b88-b897-8d59c142c137" containerID="320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812" exitCode=0
Feb 20 12:23:10.827620 master-0 kubenswrapper[31420]: I0220 12:23:10.827574 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0465bc3e-a104-4b88-b897-8d59c142c137","Type":"ContainerDied","Data":"320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812"}
Feb 20 12:23:10.827664 master-0 kubenswrapper[31420]: I0220 12:23:10.827592 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 20 12:23:10.827736 master-0 kubenswrapper[31420]: I0220 12:23:10.827603 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0465bc3e-a104-4b88-b897-8d59c142c137","Type":"ContainerDied","Data":"c7b84a1d0ce0d2208442a438a4c7dba3082c14eae69892baa8d93c3f5f91e388"}
Feb 20 12:23:10.827736 master-0 kubenswrapper[31420]: I0220 12:23:10.827615 31420 scope.go:117] "RemoveContainer" containerID="320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812"
Feb 20 12:23:10.835908 master-0 kubenswrapper[31420]: I0220 12:23:10.835852 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0dd027dd-3995-4508-bbee-7c776c2d6fe4","Type":"ContainerStarted","Data":"d502dc2befb0f2a20de9616a4a6e460a65ec856f1fdb36d701c94d67ea88424e"}
Feb 20 12:23:10.835908 master-0 kubenswrapper[31420]: I0220 12:23:10.835896 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0dd027dd-3995-4508-bbee-7c776c2d6fe4","Type":"ContainerStarted","Data":"5c9c32f719af2c0d53c1d110b69106578d6a019b1b76b2a723e1e13665035b32"}
Feb 20 12:23:10.878481 master-0 kubenswrapper[31420]: I0220 12:23:10.878393 31420 scope.go:117] "RemoveContainer" containerID="320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812"
Feb 20 12:23:10.878813 master-0 kubenswrapper[31420]: E0220 12:23:10.878787 31420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812\": container with ID starting with 320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812 not found: ID does not exist" containerID="320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812"
Feb 20 12:23:10.878879 master-0 kubenswrapper[31420]: I0220 12:23:10.878817 31420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812"} err="failed to get container status \"320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812\": rpc error: code = NotFound desc = could not find container \"320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812\": container with ID starting with 320bc3879540f145a4aacae99134a49eb41434f3e5c3460d77b7601ecced4812 not found: ID does not exist"
Feb 20 12:23:10.882178 master-0 kubenswrapper[31420]: I0220 12:23:10.882080 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.882064079 podStartE2EDuration="2.882064079s" podCreationTimestamp="2026-02-20 12:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:23:10.862961169 +0000 UTC m=+1095.582199500" watchObservedRunningTime="2026-02-20 12:23:10.882064079 +0000 UTC m=+1095.601302320"
Feb 20 12:23:10.911363 master-0 kubenswrapper[31420]: I0220 12:23:10.911084 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 20 12:23:10.934439 master-0 kubenswrapper[31420]: I0220 12:23:10.930756 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 20 12:23:10.954609 master-0
kubenswrapper[31420]: I0220 12:23:10.953715 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:23:10.954881 master-0 kubenswrapper[31420]: E0220 12:23:10.954718 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0465bc3e-a104-4b88-b897-8d59c142c137" containerName="nova-scheduler-scheduler" Feb 20 12:23:10.954881 master-0 kubenswrapper[31420]: I0220 12:23:10.954739 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="0465bc3e-a104-4b88-b897-8d59c142c137" containerName="nova-scheduler-scheduler" Feb 20 12:23:10.955093 master-0 kubenswrapper[31420]: I0220 12:23:10.955055 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="0465bc3e-a104-4b88-b897-8d59c142c137" containerName="nova-scheduler-scheduler" Feb 20 12:23:10.956705 master-0 kubenswrapper[31420]: I0220 12:23:10.955845 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 12:23:10.958044 master-0 kubenswrapper[31420]: I0220 12:23:10.957974 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 20 12:23:10.966715 master-0 kubenswrapper[31420]: I0220 12:23:10.966645 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:23:11.026819 master-0 kubenswrapper[31420]: I0220 12:23:11.026734 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/299e8ff2-72bd-4426-bc93-4bd2c89197cd-config-data\") pod \"nova-scheduler-0\" (UID: \"299e8ff2-72bd-4426-bc93-4bd2c89197cd\") " pod="openstack/nova-scheduler-0" Feb 20 12:23:11.027074 master-0 kubenswrapper[31420]: I0220 12:23:11.026881 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/299e8ff2-72bd-4426-bc93-4bd2c89197cd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"299e8ff2-72bd-4426-bc93-4bd2c89197cd\") " pod="openstack/nova-scheduler-0" Feb 20 12:23:11.027074 master-0 kubenswrapper[31420]: I0220 12:23:11.026955 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86vxt\" (UniqueName: \"kubernetes.io/projected/299e8ff2-72bd-4426-bc93-4bd2c89197cd-kube-api-access-86vxt\") pod \"nova-scheduler-0\" (UID: \"299e8ff2-72bd-4426-bc93-4bd2c89197cd\") " pod="openstack/nova-scheduler-0" Feb 20 12:23:11.132259 master-0 kubenswrapper[31420]: I0220 12:23:11.132092 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/299e8ff2-72bd-4426-bc93-4bd2c89197cd-config-data\") pod \"nova-scheduler-0\" (UID: \"299e8ff2-72bd-4426-bc93-4bd2c89197cd\") " pod="openstack/nova-scheduler-0" Feb 20 12:23:11.132259 master-0 kubenswrapper[31420]: I0220 12:23:11.132234 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299e8ff2-72bd-4426-bc93-4bd2c89197cd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"299e8ff2-72bd-4426-bc93-4bd2c89197cd\") " pod="openstack/nova-scheduler-0" Feb 20 12:23:11.132499 master-0 kubenswrapper[31420]: I0220 12:23:11.132316 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86vxt\" (UniqueName: \"kubernetes.io/projected/299e8ff2-72bd-4426-bc93-4bd2c89197cd-kube-api-access-86vxt\") pod \"nova-scheduler-0\" (UID: \"299e8ff2-72bd-4426-bc93-4bd2c89197cd\") " pod="openstack/nova-scheduler-0" Feb 20 12:23:11.145082 master-0 kubenswrapper[31420]: I0220 12:23:11.145015 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/299e8ff2-72bd-4426-bc93-4bd2c89197cd-config-data\") pod \"nova-scheduler-0\" (UID: \"299e8ff2-72bd-4426-bc93-4bd2c89197cd\") " pod="openstack/nova-scheduler-0" Feb 20 12:23:11.145296 master-0 kubenswrapper[31420]: I0220 12:23:11.145202 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299e8ff2-72bd-4426-bc93-4bd2c89197cd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"299e8ff2-72bd-4426-bc93-4bd2c89197cd\") " pod="openstack/nova-scheduler-0" Feb 20 12:23:11.154334 master-0 kubenswrapper[31420]: I0220 12:23:11.153567 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86vxt\" (UniqueName: \"kubernetes.io/projected/299e8ff2-72bd-4426-bc93-4bd2c89197cd-kube-api-access-86vxt\") pod \"nova-scheduler-0\" (UID: \"299e8ff2-72bd-4426-bc93-4bd2c89197cd\") " pod="openstack/nova-scheduler-0" Feb 20 12:23:11.276791 master-0 kubenswrapper[31420]: I0220 12:23:11.276655 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 12:23:11.518407 master-0 kubenswrapper[31420]: I0220 12:23:11.518348 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0465bc3e-a104-4b88-b897-8d59c142c137" path="/var/lib/kubelet/pods/0465bc3e-a104-4b88-b897-8d59c142c137/volumes" Feb 20 12:23:11.800393 master-0 kubenswrapper[31420]: I0220 12:23:11.800301 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 12:23:11.865359 master-0 kubenswrapper[31420]: I0220 12:23:11.865268 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"299e8ff2-72bd-4426-bc93-4bd2c89197cd","Type":"ContainerStarted","Data":"2c10494b9e072d30791356fd254730ec4b8309c6f4bd196c454efc44d872d969"} Feb 20 12:23:12.887753 master-0 kubenswrapper[31420]: I0220 12:23:12.887656 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"299e8ff2-72bd-4426-bc93-4bd2c89197cd","Type":"ContainerStarted","Data":"f1c5df50ce86e01f064ffbd2840a36bddab5f760501af331f2ec0b1f305cea5d"} Feb 20 12:23:12.933987 master-0 kubenswrapper[31420]: I0220 12:23:12.933483 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.933457651 podStartE2EDuration="2.933457651s" podCreationTimestamp="2026-02-20 12:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:23:12.911729297 +0000 UTC m=+1097.630967558" watchObservedRunningTime="2026-02-20 12:23:12.933457651 +0000 UTC m=+1097.652695902" Feb 20 12:23:14.215701 master-0 kubenswrapper[31420]: I0220 12:23:14.215610 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 20 12:23:14.216318 master-0 kubenswrapper[31420]: I0220 12:23:14.216069 31420 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 20 12:23:16.222234 master-0 kubenswrapper[31420]: I0220 12:23:16.222138 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 20 12:23:16.222234 master-0 kubenswrapper[31420]: I0220 12:23:16.222239 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 20 12:23:16.276956 master-0 kubenswrapper[31420]: I0220 12:23:16.276848 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 20 12:23:17.244052 master-0 kubenswrapper[31420]: I0220 12:23:17.243931 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b6679d6f-5fd2-407d-96b4-2dcea806dec6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.18:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:23:17.244788 master-0 kubenswrapper[31420]: I0220 12:23:17.244112 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b6679d6f-5fd2-407d-96b4-2dcea806dec6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.18:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:23:19.215713 master-0 kubenswrapper[31420]: I0220 12:23:19.215517 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 20 12:23:19.215713 master-0 kubenswrapper[31420]: I0220 12:23:19.215649 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 20 12:23:20.234122 master-0 kubenswrapper[31420]: I0220 12:23:20.234029 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0dd027dd-3995-4508-bbee-7c776c2d6fe4" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.128.1.19:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:23:20.234738 master-0 kubenswrapper[31420]: I0220 12:23:20.234010 31420 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0dd027dd-3995-4508-bbee-7c776c2d6fe4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.19:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 12:23:21.277806 master-0 kubenswrapper[31420]: I0220 12:23:21.277730 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 20 12:23:21.336797 master-0 kubenswrapper[31420]: I0220 12:23:21.336625 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 20 12:23:22.092885 master-0 kubenswrapper[31420]: I0220 12:23:22.092829 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 20 12:23:26.230832 master-0 kubenswrapper[31420]: I0220 12:23:26.230732 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 20 12:23:26.231924 master-0 kubenswrapper[31420]: I0220 12:23:26.231862 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 20 12:23:26.236962 master-0 kubenswrapper[31420]: I0220 12:23:26.236897 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 20 12:23:26.239703 master-0 kubenswrapper[31420]: I0220 12:23:26.239667 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 20 12:23:27.119789 master-0 kubenswrapper[31420]: I0220 12:23:27.119714 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 20 12:23:27.127687 master-0 kubenswrapper[31420]: I0220 
12:23:27.127607 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 20 12:23:29.224101 master-0 kubenswrapper[31420]: I0220 12:23:29.224003 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 20 12:23:29.225034 master-0 kubenswrapper[31420]: I0220 12:23:29.224970 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 20 12:23:29.236760 master-0 kubenswrapper[31420]: I0220 12:23:29.236702 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 20 12:23:30.178996 master-0 kubenswrapper[31420]: I0220 12:23:30.178915 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 20 12:23:57.176630 master-0 kubenswrapper[31420]: I0220 12:23:57.176273 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-skcgc"] Feb 20 12:23:57.176630 master-0 kubenswrapper[31420]: I0220 12:23:57.176546 31420 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc" podUID="c44ebcae-003d-4347-8ab6-36cc5b16e2df" containerName="sushy-emulator" containerID="cri-o://cb10b6e24e617d07b45499c736c68763ec59ffb74c9f30677d7b1dabe8eea533" gracePeriod=30 Feb 20 12:23:57.631911 master-0 kubenswrapper[31420]: I0220 12:23:57.631767 31420 generic.go:334] "Generic (PLEG): container finished" podID="c44ebcae-003d-4347-8ab6-36cc5b16e2df" containerID="cb10b6e24e617d07b45499c736c68763ec59ffb74c9f30677d7b1dabe8eea533" exitCode=0 Feb 20 12:23:57.632165 master-0 kubenswrapper[31420]: I0220 12:23:57.632013 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc" 
event={"ID":"c44ebcae-003d-4347-8ab6-36cc5b16e2df","Type":"ContainerDied","Data":"cb10b6e24e617d07b45499c736c68763ec59ffb74c9f30677d7b1dabe8eea533"} Feb 20 12:23:57.800287 master-0 kubenswrapper[31420]: I0220 12:23:57.800221 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc" Feb 20 12:23:57.827462 master-0 kubenswrapper[31420]: I0220 12:23:57.827380 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c44ebcae-003d-4347-8ab6-36cc5b16e2df-sushy-emulator-config\") pod \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " Feb 20 12:23:57.828269 master-0 kubenswrapper[31420]: I0220 12:23:57.828231 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44ebcae-003d-4347-8ab6-36cc5b16e2df-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "c44ebcae-003d-4347-8ab6-36cc5b16e2df" (UID: "c44ebcae-003d-4347-8ab6-36cc5b16e2df"). InnerVolumeSpecName "sushy-emulator-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:23:57.916504 master-0 kubenswrapper[31420]: I0220 12:23:57.916364 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-7qrbx"] Feb 20 12:23:57.917150 master-0 kubenswrapper[31420]: E0220 12:23:57.917110 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44ebcae-003d-4347-8ab6-36cc5b16e2df" containerName="sushy-emulator" Feb 20 12:23:57.917150 master-0 kubenswrapper[31420]: I0220 12:23:57.917139 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44ebcae-003d-4347-8ab6-36cc5b16e2df" containerName="sushy-emulator" Feb 20 12:23:57.917425 master-0 kubenswrapper[31420]: I0220 12:23:57.917399 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44ebcae-003d-4347-8ab6-36cc5b16e2df" containerName="sushy-emulator" Feb 20 12:23:57.918167 master-0 kubenswrapper[31420]: I0220 12:23:57.918123 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:57.930926 master-0 kubenswrapper[31420]: I0220 12:23:57.929256 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgrkx\" (UniqueName: \"kubernetes.io/projected/c44ebcae-003d-4347-8ab6-36cc5b16e2df-kube-api-access-fgrkx\") pod \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " Feb 20 12:23:57.930926 master-0 kubenswrapper[31420]: I0220 12:23:57.929689 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c44ebcae-003d-4347-8ab6-36cc5b16e2df-os-client-config\") pod \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\" (UID: \"c44ebcae-003d-4347-8ab6-36cc5b16e2df\") " Feb 20 12:23:57.930926 master-0 kubenswrapper[31420]: I0220 12:23:57.930386 31420 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c44ebcae-003d-4347-8ab6-36cc5b16e2df-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:57.934582 master-0 kubenswrapper[31420]: I0220 12:23:57.933764 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44ebcae-003d-4347-8ab6-36cc5b16e2df-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "c44ebcae-003d-4347-8ab6-36cc5b16e2df" (UID: "c44ebcae-003d-4347-8ab6-36cc5b16e2df"). InnerVolumeSpecName "os-client-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:23:57.938019 master-0 kubenswrapper[31420]: I0220 12:23:57.937953 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-7qrbx"] Feb 20 12:23:57.941724 master-0 kubenswrapper[31420]: I0220 12:23:57.941671 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44ebcae-003d-4347-8ab6-36cc5b16e2df-kube-api-access-fgrkx" (OuterVolumeSpecName: "kube-api-access-fgrkx") pod "c44ebcae-003d-4347-8ab6-36cc5b16e2df" (UID: "c44ebcae-003d-4347-8ab6-36cc5b16e2df"). InnerVolumeSpecName "kube-api-access-fgrkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:23:58.032558 master-0 kubenswrapper[31420]: I0220 12:23:58.032483 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9bn\" (UniqueName: \"kubernetes.io/projected/0af82d78-c3b3-4f54-acf1-b127a904ca31-kube-api-access-wp9bn\") pod \"sushy-emulator-84965d5d88-7qrbx\" (UID: \"0af82d78-c3b3-4f54-acf1-b127a904ca31\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.032883 master-0 kubenswrapper[31420]: I0220 12:23:58.032820 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0af82d78-c3b3-4f54-acf1-b127a904ca31-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-7qrbx\" (UID: \"0af82d78-c3b3-4f54-acf1-b127a904ca31\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.033291 master-0 kubenswrapper[31420]: I0220 12:23:58.033245 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0af82d78-c3b3-4f54-acf1-b127a904ca31-os-client-config\") pod \"sushy-emulator-84965d5d88-7qrbx\" (UID: \"0af82d78-c3b3-4f54-acf1-b127a904ca31\") " 
pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.033748 master-0 kubenswrapper[31420]: I0220 12:23:58.033683 31420 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c44ebcae-003d-4347-8ab6-36cc5b16e2df-os-client-config\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:58.033748 master-0 kubenswrapper[31420]: I0220 12:23:58.033743 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgrkx\" (UniqueName: \"kubernetes.io/projected/c44ebcae-003d-4347-8ab6-36cc5b16e2df-kube-api-access-fgrkx\") on node \"master-0\" DevicePath \"\"" Feb 20 12:23:58.135791 master-0 kubenswrapper[31420]: I0220 12:23:58.135686 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp9bn\" (UniqueName: \"kubernetes.io/projected/0af82d78-c3b3-4f54-acf1-b127a904ca31-kube-api-access-wp9bn\") pod \"sushy-emulator-84965d5d88-7qrbx\" (UID: \"0af82d78-c3b3-4f54-acf1-b127a904ca31\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.136202 master-0 kubenswrapper[31420]: I0220 12:23:58.135830 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0af82d78-c3b3-4f54-acf1-b127a904ca31-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-7qrbx\" (UID: \"0af82d78-c3b3-4f54-acf1-b127a904ca31\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.136202 master-0 kubenswrapper[31420]: I0220 12:23:58.135960 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0af82d78-c3b3-4f54-acf1-b127a904ca31-os-client-config\") pod \"sushy-emulator-84965d5d88-7qrbx\" (UID: \"0af82d78-c3b3-4f54-acf1-b127a904ca31\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.137232 master-0 kubenswrapper[31420]: I0220 12:23:58.137200 31420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0af82d78-c3b3-4f54-acf1-b127a904ca31-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-7qrbx\" (UID: \"0af82d78-c3b3-4f54-acf1-b127a904ca31\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.142131 master-0 kubenswrapper[31420]: I0220 12:23:58.142090 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0af82d78-c3b3-4f54-acf1-b127a904ca31-os-client-config\") pod \"sushy-emulator-84965d5d88-7qrbx\" (UID: \"0af82d78-c3b3-4f54-acf1-b127a904ca31\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.157219 master-0 kubenswrapper[31420]: I0220 12:23:58.157160 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp9bn\" (UniqueName: \"kubernetes.io/projected/0af82d78-c3b3-4f54-acf1-b127a904ca31-kube-api-access-wp9bn\") pod \"sushy-emulator-84965d5d88-7qrbx\" (UID: \"0af82d78-c3b3-4f54-acf1-b127a904ca31\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.340062 master-0 kubenswrapper[31420]: I0220 12:23:58.339984 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:23:58.644301 master-0 kubenswrapper[31420]: I0220 12:23:58.643850 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc" event={"ID":"c44ebcae-003d-4347-8ab6-36cc5b16e2df","Type":"ContainerDied","Data":"8c3861b0a28b63b1084d2fbdac77c6fd1b306f40ecedd4690b672e10f13f1843"} Feb 20 12:23:58.644301 master-0 kubenswrapper[31420]: I0220 12:23:58.644066 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-skcgc" Feb 20 12:23:58.644301 master-0 kubenswrapper[31420]: I0220 12:23:58.644267 31420 scope.go:117] "RemoveContainer" containerID="cb10b6e24e617d07b45499c736c68763ec59ffb74c9f30677d7b1dabe8eea533" Feb 20 12:23:58.697835 master-0 kubenswrapper[31420]: I0220 12:23:58.697768 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-skcgc"] Feb 20 12:23:58.707815 master-0 kubenswrapper[31420]: I0220 12:23:58.707751 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-skcgc"] Feb 20 12:23:58.962477 master-0 kubenswrapper[31420]: I0220 12:23:58.962400 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-7qrbx"] Feb 20 12:23:58.966288 master-0 kubenswrapper[31420]: W0220 12:23:58.966208 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0af82d78_c3b3_4f54_acf1_b127a904ca31.slice/crio-d5d3fb1d0a6d03ccf1d857b8c08808186b761b2a4200c3ec848beb4f8a8ea031 WatchSource:0}: Error finding container d5d3fb1d0a6d03ccf1d857b8c08808186b761b2a4200c3ec848beb4f8a8ea031: Status 404 returned error can't find the container with id d5d3fb1d0a6d03ccf1d857b8c08808186b761b2a4200c3ec848beb4f8a8ea031 Feb 20 12:23:59.517457 master-0 kubenswrapper[31420]: I0220 12:23:59.517330 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44ebcae-003d-4347-8ab6-36cc5b16e2df" path="/var/lib/kubelet/pods/c44ebcae-003d-4347-8ab6-36cc5b16e2df/volumes" Feb 20 12:23:59.676858 master-0 kubenswrapper[31420]: I0220 12:23:59.676787 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" event={"ID":"0af82d78-c3b3-4f54-acf1-b127a904ca31","Type":"ContainerStarted","Data":"749ead0f742298f39ff0b9ca550d38cbfe2a3232ceb54b9b609455f3bd5d74bc"} Feb 20 
12:23:59.677193 master-0 kubenswrapper[31420]: I0220 12:23:59.677160 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" event={"ID":"0af82d78-c3b3-4f54-acf1-b127a904ca31","Type":"ContainerStarted","Data":"d5d3fb1d0a6d03ccf1d857b8c08808186b761b2a4200c3ec848beb4f8a8ea031"} Feb 20 12:23:59.709400 master-0 kubenswrapper[31420]: I0220 12:23:59.709287 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" podStartSLOduration=2.709265256 podStartE2EDuration="2.709265256s" podCreationTimestamp="2026-02-20 12:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:23:59.705634864 +0000 UTC m=+1144.424873135" watchObservedRunningTime="2026-02-20 12:23:59.709265256 +0000 UTC m=+1144.428503507" Feb 20 12:24:08.340937 master-0 kubenswrapper[31420]: I0220 12:24:08.340855 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:24:08.340937 master-0 kubenswrapper[31420]: I0220 12:24:08.340930 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:24:08.354719 master-0 kubenswrapper[31420]: I0220 12:24:08.354631 31420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:24:08.812720 master-0 kubenswrapper[31420]: I0220 12:24:08.811370 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-84965d5d88-7qrbx" Feb 20 12:25:57.034139 master-0 kubenswrapper[31420]: I0220 12:25:57.034050 31420 scope.go:117] "RemoveContainer" containerID="13a2453f36e91d02e72cfa93ecfe6c136b9a4cb74eba8695c44691710f1ed0f6" Feb 20 12:25:57.071201 master-0 kubenswrapper[31420]: I0220 
12:25:57.071056 31420 scope.go:117] "RemoveContainer" containerID="18df7d168eb282c5bbc91f49fac72678a4c662bbafe86bf67ba91551ed043aa5" Feb 20 12:25:57.116242 master-0 kubenswrapper[31420]: I0220 12:25:57.116157 31420 scope.go:117] "RemoveContainer" containerID="d94419d0c149503d253a6791e1794d93c25f53c6bb440ec4c3d9f0faea4b6081" Feb 20 12:26:57.225350 master-0 kubenswrapper[31420]: I0220 12:26:57.225298 31420 scope.go:117] "RemoveContainer" containerID="844fc409bd4fd08593b300e7fcacbcdecf3ba5a3e68c9f7a90c2f9f597024073" Feb 20 12:26:57.257064 master-0 kubenswrapper[31420]: I0220 12:26:57.257020 31420 scope.go:117] "RemoveContainer" containerID="d9b044a68317b3d3f5d929b39f6e6e102e65dc56db00f41d411702628c62dc1c" Feb 20 12:26:57.284546 master-0 kubenswrapper[31420]: I0220 12:26:57.284477 31420 scope.go:117] "RemoveContainer" containerID="d4c4b10edf183fb34c17d57ce1327c4a724f81082f7e37ecdd3583f5864e45a3" Feb 20 12:26:57.334369 master-0 kubenswrapper[31420]: I0220 12:26:57.333960 31420 scope.go:117] "RemoveContainer" containerID="06de34299e9fbb5f40cd4903ce58ddde79dd310774d2b62b0aa9687d7d997a1f" Feb 20 12:26:57.386426 master-0 kubenswrapper[31420]: I0220 12:26:57.386039 31420 scope.go:117] "RemoveContainer" containerID="63c35f90eed454fde6afe3d0e77f139ad569d08726242fa8e3eec960ee4204cf" Feb 20 12:26:57.449788 master-0 kubenswrapper[31420]: I0220 12:26:57.449650 31420 scope.go:117] "RemoveContainer" containerID="0de51f7f49a31f2c9d89354e893509b56ea81a15e920a73326d577be8e61c29f" Feb 20 12:28:04.145693 master-0 kubenswrapper[31420]: I0220 12:28:04.145589 31420 patch_prober.go:28] interesting pod/catalog-operator-596f79dd6f-bjxbt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 12:28:04.150189 master-0 kubenswrapper[31420]: I0220 12:28:04.145697 31420 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-bjxbt" podUID="4d060bff-3c25-4eeb-bdd3-e20fb2687645" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 12:28:57.639339 master-0 kubenswrapper[31420]: I0220 12:28:57.639262 31420 scope.go:117] "RemoveContainer" containerID="f94b6604b63179e65d16ef147da2554a208fc08ba19eb028a782387a8bfa8aa1" Feb 20 12:28:57.677038 master-0 kubenswrapper[31420]: I0220 12:28:57.675955 31420 scope.go:117] "RemoveContainer" containerID="92cff045051382fa3c89b33f22f074a4ed0c1602991e3f8131a4eaf560421d6d" Feb 20 12:29:11.094122 master-0 kubenswrapper[31420]: I0220 12:29:11.094017 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9be4-account-create-update-4jzfk"] Feb 20 12:29:11.114819 master-0 kubenswrapper[31420]: I0220 12:29:11.114723 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9be4-account-create-update-4jzfk"] Feb 20 12:29:11.523506 master-0 kubenswrapper[31420]: I0220 12:29:11.522895 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28293a37-d871-493f-8286-a6705a2e5bd8" path="/var/lib/kubelet/pods/28293a37-d871-493f-8286-a6705a2e5bd8/volumes" Feb 20 12:29:12.080649 master-0 kubenswrapper[31420]: I0220 12:29:12.080494 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zkxzd"] Feb 20 12:29:12.096480 master-0 kubenswrapper[31420]: I0220 12:29:12.096394 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9fa1-account-create-update-88db9"] Feb 20 12:29:12.108476 master-0 kubenswrapper[31420]: I0220 12:29:12.108409 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-k8p84"] Feb 20 12:29:12.118103 master-0 
kubenswrapper[31420]: I0220 12:29:12.118065 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gmvz9"] Feb 20 12:29:12.128775 master-0 kubenswrapper[31420]: I0220 12:29:12.128715 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9fa1-account-create-update-88db9"] Feb 20 12:29:12.145403 master-0 kubenswrapper[31420]: I0220 12:29:12.138814 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zkxzd"] Feb 20 12:29:12.151340 master-0 kubenswrapper[31420]: I0220 12:29:12.148320 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-54fa-account-create-update-5ng4x"] Feb 20 12:29:12.158120 master-0 kubenswrapper[31420]: I0220 12:29:12.158019 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-k8p84"] Feb 20 12:29:12.168669 master-0 kubenswrapper[31420]: I0220 12:29:12.168611 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gmvz9"] Feb 20 12:29:12.179151 master-0 kubenswrapper[31420]: I0220 12:29:12.179086 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-54fa-account-create-update-5ng4x"] Feb 20 12:29:13.535028 master-0 kubenswrapper[31420]: I0220 12:29:13.534948 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0311ef3e-c313-4f78-86ce-371566b44c31" path="/var/lib/kubelet/pods/0311ef3e-c313-4f78-86ce-371566b44c31/volumes" Feb 20 12:29:13.537702 master-0 kubenswrapper[31420]: I0220 12:29:13.537646 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a83694-e4aa-4031-8ba3-7eaab90b0abd" path="/var/lib/kubelet/pods/57a83694-e4aa-4031-8ba3-7eaab90b0abd/volumes" Feb 20 12:29:13.540144 master-0 kubenswrapper[31420]: I0220 12:29:13.540034 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8565cedf-c9e9-45a2-a463-00f1f5224559" 
path="/var/lib/kubelet/pods/8565cedf-c9e9-45a2-a463-00f1f5224559/volumes" Feb 20 12:29:13.542623 master-0 kubenswrapper[31420]: I0220 12:29:13.541801 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e5fb862-336d-459e-8b0d-7a688bfc722c" path="/var/lib/kubelet/pods/9e5fb862-336d-459e-8b0d-7a688bfc722c/volumes" Feb 20 12:29:13.546690 master-0 kubenswrapper[31420]: I0220 12:29:13.544470 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0bba7eb-9925-4bdc-b18a-09a15d13bb07" path="/var/lib/kubelet/pods/b0bba7eb-9925-4bdc-b18a-09a15d13bb07/volumes" Feb 20 12:29:18.108566 master-0 kubenswrapper[31420]: I0220 12:29:18.108053 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sztf8"] Feb 20 12:29:18.131247 master-0 kubenswrapper[31420]: I0220 12:29:18.131139 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sztf8"] Feb 20 12:29:19.517342 master-0 kubenswrapper[31420]: I0220 12:29:19.517264 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b293179-ca05-4d55-8691-120c2b338814" path="/var/lib/kubelet/pods/9b293179-ca05-4d55-8691-120c2b338814/volumes" Feb 20 12:29:40.067690 master-0 kubenswrapper[31420]: I0220 12:29:40.067592 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2d6nq"] Feb 20 12:29:40.084615 master-0 kubenswrapper[31420]: I0220 12:29:40.084490 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2d6nq"] Feb 20 12:29:41.519390 master-0 kubenswrapper[31420]: I0220 12:29:41.519292 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964cb0d1-eb1a-404e-b395-0a733f4ae02b" path="/var/lib/kubelet/pods/964cb0d1-eb1a-404e-b395-0a733f4ae02b/volumes" Feb 20 12:29:54.085728 master-0 kubenswrapper[31420]: I0220 12:29:54.085644 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-5976-account-create-update-4hl72"] Feb 20 12:29:54.099570 master-0 kubenswrapper[31420]: I0220 12:29:54.099447 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2jfwr"] Feb 20 12:29:54.115720 master-0 kubenswrapper[31420]: I0220 12:29:54.115650 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c59-account-create-update-5sh5r"] Feb 20 12:29:54.127053 master-0 kubenswrapper[31420]: I0220 12:29:54.126976 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8ds52"] Feb 20 12:29:54.137508 master-0 kubenswrapper[31420]: I0220 12:29:54.137429 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5976-account-create-update-4hl72"] Feb 20 12:29:54.148078 master-0 kubenswrapper[31420]: I0220 12:29:54.148043 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c59-account-create-update-5sh5r"] Feb 20 12:29:54.183208 master-0 kubenswrapper[31420]: I0220 12:29:54.183116 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8ds52"] Feb 20 12:29:54.200143 master-0 kubenswrapper[31420]: I0220 12:29:54.199308 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2jfwr"] Feb 20 12:29:55.518487 master-0 kubenswrapper[31420]: I0220 12:29:55.518401 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc5e57b-2b58-449c-95e3-844ee6b42236" path="/var/lib/kubelet/pods/4dc5e57b-2b58-449c-95e3-844ee6b42236/volumes" Feb 20 12:29:55.519466 master-0 kubenswrapper[31420]: I0220 12:29:55.519064 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bb6428-4a9d-4b9c-b6ba-d45526559d3a" path="/var/lib/kubelet/pods/d1bb6428-4a9d-4b9c-b6ba-d45526559d3a/volumes" Feb 20 12:29:55.519653 master-0 kubenswrapper[31420]: I0220 12:29:55.519609 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f94c78c2-eb36-445b-a302-a5f67efdd1e8" path="/var/lib/kubelet/pods/f94c78c2-eb36-445b-a302-a5f67efdd1e8/volumes" Feb 20 12:29:55.520186 master-0 kubenswrapper[31420]: I0220 12:29:55.520143 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff72fc46-bb67-400f-b402-cc01ed97f277" path="/var/lib/kubelet/pods/ff72fc46-bb67-400f-b402-cc01ed97f277/volumes" Feb 20 12:29:57.778316 master-0 kubenswrapper[31420]: I0220 12:29:57.778220 31420 scope.go:117] "RemoveContainer" containerID="8b0d3997e849ef504066bb898a28f27f05b34e75f7b68648844a7e24ece351a0" Feb 20 12:29:57.816321 master-0 kubenswrapper[31420]: I0220 12:29:57.816258 31420 scope.go:117] "RemoveContainer" containerID="be0dc5fc593a041518a78b9625a2e4d7c0033dd7f77c9df9b27ad85c1ad90f7e" Feb 20 12:29:57.889350 master-0 kubenswrapper[31420]: I0220 12:29:57.889282 31420 scope.go:117] "RemoveContainer" containerID="61179ce47e86b7e72ff24a09f0108086936c3398509fb17f634c7953b8e76888" Feb 20 12:29:57.898341 master-0 kubenswrapper[31420]: E0220 12:29:57.898277 31420 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:49370->192.168.32.10:45797: write tcp 192.168.32.10:49370->192.168.32.10:45797: write: broken pipe Feb 20 12:29:57.989918 master-0 kubenswrapper[31420]: I0220 12:29:57.989824 31420 scope.go:117] "RemoveContainer" containerID="7f94cef87282f6f121aa17622625616ad8a9ba45820aec1952de28eadbd8d41e" Feb 20 12:29:58.028877 master-0 kubenswrapper[31420]: I0220 12:29:58.028755 31420 scope.go:117] "RemoveContainer" containerID="3dbb81d5035182c2dbc7c4509b6861a658218421378232f8e1e6d08cbb50105c" Feb 20 12:29:58.074353 master-0 kubenswrapper[31420]: I0220 12:29:58.074274 31420 scope.go:117] "RemoveContainer" containerID="637f8fa838694e5132ca7464110a75c39414ef566f4293b34cd8a9f584f7674a" Feb 20 12:29:58.124754 master-0 kubenswrapper[31420]: I0220 12:29:58.124660 31420 scope.go:117] "RemoveContainer" 
containerID="bc235ae96af00ab441d425f1b0833f39c43498f8a4b325b2b78ac03deb486c99" Feb 20 12:29:58.159466 master-0 kubenswrapper[31420]: I0220 12:29:58.159402 31420 scope.go:117] "RemoveContainer" containerID="f78318e942b07f7a320dda2061b8c25dd130a458eba70751b27e9a339975dbc6" Feb 20 12:29:58.202678 master-0 kubenswrapper[31420]: I0220 12:29:58.202583 31420 scope.go:117] "RemoveContainer" containerID="c13474afee77470017ea846d2ca6f3fec08ef7c3ab5607a81156d250816ab599" Feb 20 12:29:58.240753 master-0 kubenswrapper[31420]: I0220 12:29:58.240643 31420 scope.go:117] "RemoveContainer" containerID="718efa5e0092db719d3aa3dc1f1adc336410e90ed8c7ad41f1becfa9beb14c6e" Feb 20 12:29:58.280582 master-0 kubenswrapper[31420]: I0220 12:29:58.280486 31420 scope.go:117] "RemoveContainer" containerID="66de4746d8519524504725e279badc0d8f23fd8623f9f07793a365f3dbfb15b9" Feb 20 12:29:58.335819 master-0 kubenswrapper[31420]: I0220 12:29:58.335690 31420 scope.go:117] "RemoveContainer" containerID="5df8110662bd016fd8287deb6b6772e83dce00230979d49dc7488b445f50b811" Feb 20 12:30:00.060023 master-0 kubenswrapper[31420]: I0220 12:30:00.059939 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-bg44c"] Feb 20 12:30:00.080065 master-0 kubenswrapper[31420]: I0220 12:30:00.079966 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-bg44c"] Feb 20 12:30:00.178876 master-0 kubenswrapper[31420]: I0220 12:30:00.178755 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk"] Feb 20 12:30:00.184883 master-0 kubenswrapper[31420]: I0220 12:30:00.184793 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.187986 master-0 kubenswrapper[31420]: I0220 12:30:00.187738 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kmfb6" Feb 20 12:30:00.188520 master-0 kubenswrapper[31420]: I0220 12:30:00.188468 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 20 12:30:00.197602 master-0 kubenswrapper[31420]: I0220 12:30:00.197497 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk"] Feb 20 12:30:00.252269 master-0 kubenswrapper[31420]: I0220 12:30:00.252200 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19a62233-9dca-412d-bb01-6be0e63093a2-secret-volume\") pod \"collect-profiles-29526510-94fmk\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.252269 master-0 kubenswrapper[31420]: I0220 12:30:00.252248 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19a62233-9dca-412d-bb01-6be0e63093a2-config-volume\") pod \"collect-profiles-29526510-94fmk\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.252643 master-0 kubenswrapper[31420]: I0220 12:30:00.252291 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvjh\" (UniqueName: \"kubernetes.io/projected/19a62233-9dca-412d-bb01-6be0e63093a2-kube-api-access-8zvjh\") pod \"collect-profiles-29526510-94fmk\" (UID: 
\"19a62233-9dca-412d-bb01-6be0e63093a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.354731 master-0 kubenswrapper[31420]: I0220 12:30:00.354598 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvjh\" (UniqueName: \"kubernetes.io/projected/19a62233-9dca-412d-bb01-6be0e63093a2-kube-api-access-8zvjh\") pod \"collect-profiles-29526510-94fmk\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.354948 master-0 kubenswrapper[31420]: I0220 12:30:00.354881 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19a62233-9dca-412d-bb01-6be0e63093a2-secret-volume\") pod \"collect-profiles-29526510-94fmk\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.354948 master-0 kubenswrapper[31420]: I0220 12:30:00.354913 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19a62233-9dca-412d-bb01-6be0e63093a2-config-volume\") pod \"collect-profiles-29526510-94fmk\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.355994 master-0 kubenswrapper[31420]: I0220 12:30:00.355954 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19a62233-9dca-412d-bb01-6be0e63093a2-config-volume\") pod \"collect-profiles-29526510-94fmk\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.358890 master-0 kubenswrapper[31420]: I0220 12:30:00.358842 31420 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19a62233-9dca-412d-bb01-6be0e63093a2-secret-volume\") pod \"collect-profiles-29526510-94fmk\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.387052 master-0 kubenswrapper[31420]: I0220 12:30:00.386991 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvjh\" (UniqueName: \"kubernetes.io/projected/19a62233-9dca-412d-bb01-6be0e63093a2-kube-api-access-8zvjh\") pod \"collect-profiles-29526510-94fmk\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:00.511201 master-0 kubenswrapper[31420]: I0220 12:30:00.510753 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:01.005171 master-0 kubenswrapper[31420]: I0220 12:30:01.005101 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk"] Feb 20 12:30:01.508457 master-0 kubenswrapper[31420]: I0220 12:30:01.508390 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67854856-9af9-4d6c-af39-4a6c05afaa69" path="/var/lib/kubelet/pods/67854856-9af9-4d6c-af39-4a6c05afaa69/volumes" Feb 20 12:30:02.048953 master-0 kubenswrapper[31420]: I0220 12:30:02.048825 31420 generic.go:334] "Generic (PLEG): container finished" podID="19a62233-9dca-412d-bb01-6be0e63093a2" containerID="deafe34966c5f32a649938c848cdf8868083fbe1c9b3c47d73d3223baefdfb49" exitCode=0 Feb 20 12:30:02.048953 master-0 kubenswrapper[31420]: I0220 12:30:02.048894 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" 
event={"ID":"19a62233-9dca-412d-bb01-6be0e63093a2","Type":"ContainerDied","Data":"deafe34966c5f32a649938c848cdf8868083fbe1c9b3c47d73d3223baefdfb49"} Feb 20 12:30:02.048953 master-0 kubenswrapper[31420]: I0220 12:30:02.048921 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" event={"ID":"19a62233-9dca-412d-bb01-6be0e63093a2","Type":"ContainerStarted","Data":"258c54552939471d0d1c4ff634acdd80aaec9533988e5dbd5e28e8e9b3371254"} Feb 20 12:30:04.160374 master-0 kubenswrapper[31420]: I0220 12:30:04.160284 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:04.289746 master-0 kubenswrapper[31420]: I0220 12:30:04.289585 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19a62233-9dca-412d-bb01-6be0e63093a2-secret-volume\") pod \"19a62233-9dca-412d-bb01-6be0e63093a2\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " Feb 20 12:30:04.289973 master-0 kubenswrapper[31420]: I0220 12:30:04.289923 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19a62233-9dca-412d-bb01-6be0e63093a2-config-volume\") pod \"19a62233-9dca-412d-bb01-6be0e63093a2\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " Feb 20 12:30:04.290046 master-0 kubenswrapper[31420]: I0220 12:30:04.289986 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvjh\" (UniqueName: \"kubernetes.io/projected/19a62233-9dca-412d-bb01-6be0e63093a2-kube-api-access-8zvjh\") pod \"19a62233-9dca-412d-bb01-6be0e63093a2\" (UID: \"19a62233-9dca-412d-bb01-6be0e63093a2\") " Feb 20 12:30:04.290671 master-0 kubenswrapper[31420]: I0220 12:30:04.290506 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/19a62233-9dca-412d-bb01-6be0e63093a2-config-volume" (OuterVolumeSpecName: "config-volume") pod "19a62233-9dca-412d-bb01-6be0e63093a2" (UID: "19a62233-9dca-412d-bb01-6be0e63093a2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 12:30:04.290972 master-0 kubenswrapper[31420]: I0220 12:30:04.290931 31420 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19a62233-9dca-412d-bb01-6be0e63093a2-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 12:30:04.293494 master-0 kubenswrapper[31420]: I0220 12:30:04.293443 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a62233-9dca-412d-bb01-6be0e63093a2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "19a62233-9dca-412d-bb01-6be0e63093a2" (UID: "19a62233-9dca-412d-bb01-6be0e63093a2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 12:30:04.294706 master-0 kubenswrapper[31420]: I0220 12:30:04.294649 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a62233-9dca-412d-bb01-6be0e63093a2-kube-api-access-8zvjh" (OuterVolumeSpecName: "kube-api-access-8zvjh") pod "19a62233-9dca-412d-bb01-6be0e63093a2" (UID: "19a62233-9dca-412d-bb01-6be0e63093a2"). InnerVolumeSpecName "kube-api-access-8zvjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 12:30:04.394645 master-0 kubenswrapper[31420]: I0220 12:30:04.394567 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zvjh\" (UniqueName: \"kubernetes.io/projected/19a62233-9dca-412d-bb01-6be0e63093a2-kube-api-access-8zvjh\") on node \"master-0\" DevicePath \"\"" Feb 20 12:30:04.394645 master-0 kubenswrapper[31420]: I0220 12:30:04.394634 31420 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19a62233-9dca-412d-bb01-6be0e63093a2-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 12:30:04.678688 master-0 kubenswrapper[31420]: I0220 12:30:04.678559 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" event={"ID":"19a62233-9dca-412d-bb01-6be0e63093a2","Type":"ContainerDied","Data":"258c54552939471d0d1c4ff634acdd80aaec9533988e5dbd5e28e8e9b3371254"} Feb 20 12:30:04.678688 master-0 kubenswrapper[31420]: I0220 12:30:04.678621 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="258c54552939471d0d1c4ff634acdd80aaec9533988e5dbd5e28e8e9b3371254" Feb 20 12:30:04.678974 master-0 kubenswrapper[31420]: I0220 12:30:04.678941 31420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526510-94fmk" Feb 20 12:30:05.044590 master-0 kubenswrapper[31420]: I0220 12:30:05.044446 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-b737-account-create-update-wnbqm"] Feb 20 12:30:05.056340 master-0 kubenswrapper[31420]: I0220 12:30:05.056197 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-b737-account-create-update-wnbqm"] Feb 20 12:30:05.272922 master-0 kubenswrapper[31420]: I0220 12:30:05.272816 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"] Feb 20 12:30:05.286775 master-0 kubenswrapper[31420]: I0220 12:30:05.286689 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526465-tpgw4"] Feb 20 12:30:05.522440 master-0 kubenswrapper[31420]: I0220 12:30:05.522354 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5827049e-6178-46cf-83c5-cff55daac768" path="/var/lib/kubelet/pods/5827049e-6178-46cf-83c5-cff55daac768/volumes" Feb 20 12:30:05.523855 master-0 kubenswrapper[31420]: I0220 12:30:05.523803 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba9661c0-9b94-468a-a207-51d7de5ecc92" path="/var/lib/kubelet/pods/ba9661c0-9b94-468a-a207-51d7de5ecc92/volumes" Feb 20 12:30:06.054594 master-0 kubenswrapper[31420]: I0220 12:30:06.054501 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-4q2vm"] Feb 20 12:30:06.072649 master-0 kubenswrapper[31420]: I0220 12:30:06.072557 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-4q2vm"] Feb 20 12:30:07.516618 master-0 kubenswrapper[31420]: I0220 12:30:07.516471 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f57898-ebe9-4c8b-984b-86a4b35fed36" 
path="/var/lib/kubelet/pods/33f57898-ebe9-4c8b-984b-86a4b35fed36/volumes" Feb 20 12:30:18.054765 master-0 kubenswrapper[31420]: I0220 12:30:18.054683 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-g4frr"] Feb 20 12:30:18.068131 master-0 kubenswrapper[31420]: I0220 12:30:18.068066 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-g4frr"] Feb 20 12:30:19.514160 master-0 kubenswrapper[31420]: I0220 12:30:19.513944 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d644bcbb-d205-4408-a0c7-7e3bbc55e180" path="/var/lib/kubelet/pods/d644bcbb-d205-4408-a0c7-7e3bbc55e180/volumes" Feb 20 12:30:28.055243 master-0 kubenswrapper[31420]: I0220 12:30:28.055169 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6c2dd"] Feb 20 12:30:28.106455 master-0 kubenswrapper[31420]: I0220 12:30:28.106372 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6c2dd"] Feb 20 12:30:29.518735 master-0 kubenswrapper[31420]: I0220 12:30:29.518637 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="546ac957-a54d-45ab-aaf7-f0f22fbb5883" path="/var/lib/kubelet/pods/546ac957-a54d-45ab-aaf7-f0f22fbb5883/volumes" Feb 20 12:30:30.067792 master-0 kubenswrapper[31420]: I0220 12:30:30.067716 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d44a4-db-sync-wkljp"] Feb 20 12:30:30.088958 master-0 kubenswrapper[31420]: I0220 12:30:30.088853 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d44a4-db-sync-wkljp"] Feb 20 12:30:31.519957 master-0 kubenswrapper[31420]: I0220 12:30:31.519878 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85180db-2d00-4ec9-b408-813c4db2d86b" path="/var/lib/kubelet/pods/f85180db-2d00-4ec9-b408-813c4db2d86b/volumes" Feb 20 12:30:37.069751 master-0 kubenswrapper[31420]: I0220 12:30:37.069663 31420 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-v9jsf"] Feb 20 12:30:37.088670 master-0 kubenswrapper[31420]: I0220 12:30:37.088591 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-v9jsf"] Feb 20 12:30:37.514344 master-0 kubenswrapper[31420]: I0220 12:30:37.514252 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e730e756-3c53-48ff-a27d-5ddbf042a996" path="/var/lib/kubelet/pods/e730e756-3c53-48ff-a27d-5ddbf042a996/volumes" Feb 20 12:30:45.066562 master-0 kubenswrapper[31420]: I0220 12:30:45.066474 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-9z9n4"] Feb 20 12:30:45.081578 master-0 kubenswrapper[31420]: I0220 12:30:45.081485 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-9z9n4"] Feb 20 12:30:45.545096 master-0 kubenswrapper[31420]: I0220 12:30:45.544993 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1caf9802-b963-4368-ac29-e47812b48ad3" path="/var/lib/kubelet/pods/1caf9802-b963-4368-ac29-e47812b48ad3/volumes" Feb 20 12:30:53.074312 master-0 kubenswrapper[31420]: I0220 12:30:53.074200 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-50c8-account-create-update-m4sp9"] Feb 20 12:30:53.104346 master-0 kubenswrapper[31420]: I0220 12:30:53.104251 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-2p4tg"] Feb 20 12:30:53.118930 master-0 kubenswrapper[31420]: I0220 12:30:53.118505 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-50c8-account-create-update-m4sp9"] Feb 20 12:30:53.133131 master-0 kubenswrapper[31420]: I0220 12:30:53.133063 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-2p4tg"] Feb 20 12:30:53.516747 master-0 kubenswrapper[31420]: I0220 12:30:53.513336 31420 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="95fc91ce-d187-45a1-bc88-45c0415d6cde" path="/var/lib/kubelet/pods/95fc91ce-d187-45a1-bc88-45c0415d6cde/volumes" Feb 20 12:30:53.516747 master-0 kubenswrapper[31420]: I0220 12:30:53.514005 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab73f02-e440-40c2-bc9d-073803f49fc8" path="/var/lib/kubelet/pods/aab73f02-e440-40c2-bc9d-073803f49fc8/volumes" Feb 20 12:30:58.666770 master-0 kubenswrapper[31420]: I0220 12:30:58.666704 31420 scope.go:117] "RemoveContainer" containerID="6b38aff5ec6e38ac18cb466ee47b561072367c3019ad692af90939cdeb3e6adb" Feb 20 12:30:58.710652 master-0 kubenswrapper[31420]: I0220 12:30:58.709795 31420 scope.go:117] "RemoveContainer" containerID="aa7560336d93a2765df2978574a3c3d391bc12dec63fac01cdb150e552294168" Feb 20 12:30:58.789954 master-0 kubenswrapper[31420]: I0220 12:30:58.789840 31420 scope.go:117] "RemoveContainer" containerID="7fe5f27a7ae49c9b623a643dc445b3fa77a869b52d99dffe7799a83234971877" Feb 20 12:30:58.825737 master-0 kubenswrapper[31420]: I0220 12:30:58.825665 31420 scope.go:117] "RemoveContainer" containerID="672c4272aec9b7e8ded032efdcd0ebd3b995afe98c3005fde71f6773d902c8c4" Feb 20 12:30:58.873125 master-0 kubenswrapper[31420]: I0220 12:30:58.873047 31420 scope.go:117] "RemoveContainer" containerID="8e9ea463f1c7507d7bcb6ea06d6b97f934c584defeeedab8c83b630db28fe8cf" Feb 20 12:30:58.941369 master-0 kubenswrapper[31420]: I0220 12:30:58.941296 31420 scope.go:117] "RemoveContainer" containerID="89b844a662e66706dd4946c8d62966d5c7ea40f1737816ca2dca836c3b17c380" Feb 20 12:30:58.980921 master-0 kubenswrapper[31420]: I0220 12:30:58.980881 31420 scope.go:117] "RemoveContainer" containerID="fc1b6c7301a8aa1f6470e8ecb18f25402b2f6fd16e029212b08909682ebf8d84" Feb 20 12:30:59.026466 master-0 kubenswrapper[31420]: I0220 12:30:59.026359 31420 scope.go:117] "RemoveContainer" containerID="a837f6722373de985a0022d381f5e0bd07e868b8b2bee4a3be2b2f377ec6c72d" Feb 20 12:30:59.071237 master-0 
kubenswrapper[31420]: I0220 12:30:59.071154 31420 scope.go:117] "RemoveContainer" containerID="af7bcd0389da6ecbf387e067862097cbdde12c6359b8812eb3086092ba104b4a"
Feb 20 12:30:59.115323 master-0 kubenswrapper[31420]: I0220 12:30:59.115283 31420 scope.go:117] "RemoveContainer" containerID="2f6e2cdefb1f6c8584f138cfc2ac8b1cae268cc4e1730c5cf5119ebd8fc9f159"
Feb 20 12:30:59.137668 master-0 kubenswrapper[31420]: I0220 12:30:59.137628 31420 scope.go:117] "RemoveContainer" containerID="6519461b2e173561b6ef562740aab62732cd443f0355bca2694104d6d4bc42f7"
Feb 20 12:30:59.172317 master-0 kubenswrapper[31420]: I0220 12:30:59.169042 31420 scope.go:117] "RemoveContainer" containerID="6cfa026e3bf259a7d671a1b5788cae2f9c7cd73916a2fd08aebc17f25e3c4856"
Feb 20 12:31:14.085228 master-0 kubenswrapper[31420]: I0220 12:31:14.085145 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-sync-zz7jl"]
Feb 20 12:31:14.102377 master-0 kubenswrapper[31420]: I0220 12:31:14.102174 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-sync-zz7jl"]
Feb 20 12:31:15.530666 master-0 kubenswrapper[31420]: I0220 12:31:15.530576 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db935f50-a18d-4ceb-9a23-149442b7f041" path="/var/lib/kubelet/pods/db935f50-a18d-4ceb-9a23-149442b7f041/volumes"
Feb 20 12:31:22.054634 master-0 kubenswrapper[31420]: I0220 12:31:22.054483 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f6bb-account-create-update-bmnfz"]
Feb 20 12:31:22.078657 master-0 kubenswrapper[31420]: I0220 12:31:22.078428 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-bb02-account-create-update-gjp7p"]
Feb 20 12:31:22.089353 master-0 kubenswrapper[31420]: I0220 12:31:22.089276 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f6bb-account-create-update-bmnfz"]
Feb 20 12:31:22.098579 master-0 kubenswrapper[31420]: I0220 12:31:22.098503 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-bb02-account-create-update-gjp7p"]
Feb 20 12:31:22.108717 master-0 kubenswrapper[31420]: I0220 12:31:22.108634 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rkmxn"]
Feb 20 12:31:22.119106 master-0 kubenswrapper[31420]: I0220 12:31:22.118997 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rkmxn"]
Feb 20 12:31:23.057892 master-0 kubenswrapper[31420]: I0220 12:31:23.057836 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-zhh86"]
Feb 20 12:31:23.075821 master-0 kubenswrapper[31420]: I0220 12:31:23.075759 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8jmg4"]
Feb 20 12:31:23.085571 master-0 kubenswrapper[31420]: I0220 12:31:23.085476 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-zhh86"]
Feb 20 12:31:23.094643 master-0 kubenswrapper[31420]: I0220 12:31:23.094598 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8jmg4"]
Feb 20 12:31:23.103820 master-0 kubenswrapper[31420]: I0220 12:31:23.103598 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ace2-account-create-update-8fqwl"]
Feb 20 12:31:23.112687 master-0 kubenswrapper[31420]: I0220 12:31:23.112646 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ace2-account-create-update-8fqwl"]
Feb 20 12:31:23.515628 master-0 kubenswrapper[31420]: I0220 12:31:23.514766 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710" path="/var/lib/kubelet/pods/1ec6ab0a-82f4-4cbd-bba7-b3fe8f9a1710/volumes"
Feb 20 12:31:23.516147 master-0 kubenswrapper[31420]: I0220 12:31:23.516100 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc933f4-f7a5-4ae9-8d43-336be86a5f34" path="/var/lib/kubelet/pods/5dc933f4-f7a5-4ae9-8d43-336be86a5f34/volumes"
Feb 20 12:31:23.517406 master-0 kubenswrapper[31420]: I0220 12:31:23.517357 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603ae118-f4a7-48ea-b99b-8a71f297b617" path="/var/lib/kubelet/pods/603ae118-f4a7-48ea-b99b-8a71f297b617/volumes"
Feb 20 12:31:23.518653 master-0 kubenswrapper[31420]: I0220 12:31:23.518602 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8da94d-28af-4392-bdf9-c0d1c6eaeda4" path="/var/lib/kubelet/pods/af8da94d-28af-4392-bdf9-c0d1c6eaeda4/volumes"
Feb 20 12:31:23.520969 master-0 kubenswrapper[31420]: I0220 12:31:23.520921 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1383ebf-b51c-4b56-bdec-191e09ab35ac" path="/var/lib/kubelet/pods/d1383ebf-b51c-4b56-bdec-191e09ab35ac/volumes"
Feb 20 12:31:23.522168 master-0 kubenswrapper[31420]: I0220 12:31:23.522125 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d142bf65-a53f-4fd8-95b0-46c05306e168" path="/var/lib/kubelet/pods/d142bf65-a53f-4fd8-95b0-46c05306e168/volumes"
Feb 20 12:31:55.084978 master-0 kubenswrapper[31420]: I0220 12:31:55.084904 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nd8sh"]
Feb 20 12:31:55.111864 master-0 kubenswrapper[31420]: I0220 12:31:55.111335 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nd8sh"]
Feb 20 12:31:55.516569 master-0 kubenswrapper[31420]: I0220 12:31:55.516455 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41df128e-94f0-4150-b0d6-2e81542c1ab7" path="/var/lib/kubelet/pods/41df128e-94f0-4150-b0d6-2e81542c1ab7/volumes"
Feb 20 12:31:59.450082 master-0 kubenswrapper[31420]: I0220 12:31:59.449981 31420 scope.go:117] "RemoveContainer" containerID="58130a99e04289ab8fafdad5b078f40dd208f13581bfb5b6885307aaf49fd8fe"
Feb 20 12:31:59.530664 master-0 kubenswrapper[31420]: I0220 12:31:59.530584 31420 scope.go:117] "RemoveContainer" containerID="3815ba6310d328f53d67ea5b0ebb678596af8ce615bdef33afbcd4e1ce09a5af"
Feb 20 12:31:59.590126 master-0 kubenswrapper[31420]: I0220 12:31:59.590053 31420 scope.go:117] "RemoveContainer" containerID="9bb96d56826f96af093dc2e3d3c8daa2d93bb851667bba16f7ab39fea2290048"
Feb 20 12:31:59.640860 master-0 kubenswrapper[31420]: I0220 12:31:59.640805 31420 scope.go:117] "RemoveContainer" containerID="63a8f9b946b53f0ce79cb3ae39e1d46c27f94a427c48979fa883ebc4942a0c06"
Feb 20 12:31:59.680635 master-0 kubenswrapper[31420]: I0220 12:31:59.680585 31420 scope.go:117] "RemoveContainer" containerID="8fb243e514b9ab6000eb1df79f67dfa417c2ac187ba67b476973a21322589132"
Feb 20 12:31:59.721662 master-0 kubenswrapper[31420]: I0220 12:31:59.721582 31420 scope.go:117] "RemoveContainer" containerID="97757dc3712b363521655fbabb164ff78d64e5f2d3e118f917513153a122a4f6"
Feb 20 12:31:59.780976 master-0 kubenswrapper[31420]: I0220 12:31:59.780913 31420 scope.go:117] "RemoveContainer" containerID="05c49cede32294792bf46180583c0b8580b76f9952709092062e13e31fc8fdf7"
Feb 20 12:31:59.816666 master-0 kubenswrapper[31420]: I0220 12:31:59.816581 31420 scope.go:117] "RemoveContainer" containerID="9f50b16b977160a1f15ee6b7ba30314830fd1f4637aa06b39d33a96e7ac62bd9"
Feb 20 12:32:22.094758 master-0 kubenswrapper[31420]: I0220 12:32:22.092325 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zdwhr"]
Feb 20 12:32:22.113559 master-0 kubenswrapper[31420]: I0220 12:32:22.112942 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xfqx5"]
Feb 20 12:32:22.124714 master-0 kubenswrapper[31420]: I0220 12:32:22.124599 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zdwhr"]
Feb 20 12:32:22.135450 master-0 kubenswrapper[31420]: I0220 12:32:22.135383 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xfqx5"]
Feb 20 12:32:23.527020 master-0 kubenswrapper[31420]: I0220 12:32:23.526661 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00959100-db68-42e4-9009-7424e5bdffe9" path="/var/lib/kubelet/pods/00959100-db68-42e4-9009-7424e5bdffe9/volumes"
Feb 20 12:32:23.528075 master-0 kubenswrapper[31420]: I0220 12:32:23.527675 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05b1900-c2a2-4f2b-b61a-32fb0825fb42" path="/var/lib/kubelet/pods/b05b1900-c2a2-4f2b-b61a-32fb0825fb42/volumes"
Feb 20 12:33:00.041259 master-0 kubenswrapper[31420]: I0220 12:33:00.041139 31420 scope.go:117] "RemoveContainer" containerID="072d654a1934e8076b1f63d71c80bc248f6d5ec0809f229f6ef67ff035b59ba6"
Feb 20 12:33:00.112480 master-0 kubenswrapper[31420]: I0220 12:33:00.112371 31420 scope.go:117] "RemoveContainer" containerID="952582ff816e7852a32903563543aa08297742db0d7e5cb97fa7a873499a3555"
Feb 20 12:33:03.077764 master-0 kubenswrapper[31420]: I0220 12:33:03.077668 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-host-discover-n25nj"]
Feb 20 12:33:03.095495 master-0 kubenswrapper[31420]: I0220 12:33:03.095391 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-n25nj"]
Feb 20 12:33:03.525631 master-0 kubenswrapper[31420]: I0220 12:33:03.525328 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8218b6ff-f982-48d1-8fa5-82ea8e531fe6" path="/var/lib/kubelet/pods/8218b6ff-f982-48d1-8fa5-82ea8e531fe6/volumes"
Feb 20 12:33:04.138779 master-0 kubenswrapper[31420]: I0220 12:33:04.138070 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lnnfr"]
Feb 20 12:33:04.153460 master-0 kubenswrapper[31420]: I0220 12:33:04.153253 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lnnfr"]
Feb 20 12:33:05.514140 master-0 kubenswrapper[31420]: I0220 12:33:05.514070 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d111110-d012-4ec5-9c26-0701910b11b2" path="/var/lib/kubelet/pods/1d111110-d012-4ec5-9c26-0701910b11b2/volumes"
Feb 20 12:34:00.287512 master-0 kubenswrapper[31420]: I0220 12:34:00.287415 31420 scope.go:117] "RemoveContainer" containerID="d296df2bdf90e964d7294b8678e3869cb6f5a9977d3ce43f146fcfd250bbf994"
Feb 20 12:34:00.363444 master-0 kubenswrapper[31420]: I0220 12:34:00.363361 31420 scope.go:117] "RemoveContainer" containerID="5c792a8d70d7c3874e833eb39c1bb9aa91d63055305749d26df7026bed545b13"
Feb 20 12:40:52.202709 master-0 kubenswrapper[31420]: E0220 12:40:52.202598 31420 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.32.10:37942->192.168.32.10:45797: read tcp 192.168.32.10:37942->192.168.32.10:45797: read: connection reset by peer
Feb 20 12:45:00.192562 master-0 kubenswrapper[31420]: I0220 12:45:00.192450 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"]
Feb 20 12:45:00.193669 master-0 kubenswrapper[31420]: E0220 12:45:00.193261 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a62233-9dca-412d-bb01-6be0e63093a2" containerName="collect-profiles"
Feb 20 12:45:00.193669 master-0 kubenswrapper[31420]: I0220 12:45:00.193287 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a62233-9dca-412d-bb01-6be0e63093a2" containerName="collect-profiles"
Feb 20 12:45:00.193830 master-0 kubenswrapper[31420]: I0220 12:45:00.193675 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a62233-9dca-412d-bb01-6be0e63093a2" containerName="collect-profiles"
Feb 20 12:45:00.194764 master-0 kubenswrapper[31420]: I0220 12:45:00.194719 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.198746 master-0 kubenswrapper[31420]: I0220 12:45:00.198675 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 20 12:45:00.198887 master-0 kubenswrapper[31420]: I0220 12:45:00.198759 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8xt\" (UniqueName: \"kubernetes.io/projected/1e7c90d8-6653-4682-b89d-6aa5421c3b29-kube-api-access-rt8xt\") pod \"collect-profiles-29526525-txvm2\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.198973 master-0 kubenswrapper[31420]: I0220 12:45:00.198879 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7c90d8-6653-4682-b89d-6aa5421c3b29-config-volume\") pod \"collect-profiles-29526525-txvm2\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.198973 master-0 kubenswrapper[31420]: I0220 12:45:00.198918 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e7c90d8-6653-4682-b89d-6aa5421c3b29-secret-volume\") pod \"collect-profiles-29526525-txvm2\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.198973 master-0 kubenswrapper[31420]: I0220 12:45:00.198959 31420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kmfb6"
Feb 20 12:45:00.213195 master-0 kubenswrapper[31420]: I0220 12:45:00.205133 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"]
Feb 20 12:45:00.302179 master-0 kubenswrapper[31420]: I0220 12:45:00.302106 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8xt\" (UniqueName: \"kubernetes.io/projected/1e7c90d8-6653-4682-b89d-6aa5421c3b29-kube-api-access-rt8xt\") pod \"collect-profiles-29526525-txvm2\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.302396 master-0 kubenswrapper[31420]: I0220 12:45:00.302227 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7c90d8-6653-4682-b89d-6aa5421c3b29-config-volume\") pod \"collect-profiles-29526525-txvm2\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.302396 master-0 kubenswrapper[31420]: I0220 12:45:00.302256 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e7c90d8-6653-4682-b89d-6aa5421c3b29-secret-volume\") pod \"collect-profiles-29526525-txvm2\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.304895 master-0 kubenswrapper[31420]: I0220 12:45:00.304838 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7c90d8-6653-4682-b89d-6aa5421c3b29-config-volume\") pod \"collect-profiles-29526525-txvm2\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.312375 master-0 kubenswrapper[31420]: I0220 12:45:00.312305 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e7c90d8-6653-4682-b89d-6aa5421c3b29-secret-volume\") pod \"collect-profiles-29526525-txvm2\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.320710 master-0 kubenswrapper[31420]: I0220 12:45:00.320663 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8xt\" (UniqueName: \"kubernetes.io/projected/1e7c90d8-6653-4682-b89d-6aa5421c3b29-kube-api-access-rt8xt\") pod \"collect-profiles-29526525-txvm2\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:00.551201 master-0 kubenswrapper[31420]: I0220 12:45:00.551123 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:01.135039 master-0 kubenswrapper[31420]: W0220 12:45:01.130411 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e7c90d8_6653_4682_b89d_6aa5421c3b29.slice/crio-1c525c58d880c0bf116adcb58c7a645fd7b02ea932d746142b8493a218b997ba WatchSource:0}: Error finding container 1c525c58d880c0bf116adcb58c7a645fd7b02ea932d746142b8493a218b997ba: Status 404 returned error can't find the container with id 1c525c58d880c0bf116adcb58c7a645fd7b02ea932d746142b8493a218b997ba
Feb 20 12:45:01.135039 master-0 kubenswrapper[31420]: I0220 12:45:01.131292 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"]
Feb 20 12:45:01.462066 master-0 kubenswrapper[31420]: I0220 12:45:01.461948 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2" event={"ID":"1e7c90d8-6653-4682-b89d-6aa5421c3b29","Type":"ContainerStarted","Data":"13d15b7feabeac57e172c6301e1ffe5243f9d6f15e9810ee1c70113166b3cf21"}
Feb 20 12:45:01.462066 master-0 kubenswrapper[31420]: I0220 12:45:01.462017 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2" event={"ID":"1e7c90d8-6653-4682-b89d-6aa5421c3b29","Type":"ContainerStarted","Data":"1c525c58d880c0bf116adcb58c7a645fd7b02ea932d746142b8493a218b997ba"}
Feb 20 12:45:01.498565 master-0 kubenswrapper[31420]: I0220 12:45:01.496874 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2" podStartSLOduration=1.496848464 podStartE2EDuration="1.496848464s" podCreationTimestamp="2026-02-20 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:45:01.485922014 +0000 UTC m=+2406.205160295" watchObservedRunningTime="2026-02-20 12:45:01.496848464 +0000 UTC m=+2406.216086715"
Feb 20 12:45:02.481917 master-0 kubenswrapper[31420]: I0220 12:45:02.481808 31420 generic.go:334] "Generic (PLEG): container finished" podID="1e7c90d8-6653-4682-b89d-6aa5421c3b29" containerID="13d15b7feabeac57e172c6301e1ffe5243f9d6f15e9810ee1c70113166b3cf21" exitCode=0
Feb 20 12:45:02.482791 master-0 kubenswrapper[31420]: I0220 12:45:02.481931 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2" event={"ID":"1e7c90d8-6653-4682-b89d-6aa5421c3b29","Type":"ContainerDied","Data":"13d15b7feabeac57e172c6301e1ffe5243f9d6f15e9810ee1c70113166b3cf21"}
Feb 20 12:45:04.025420 master-0 kubenswrapper[31420]: I0220 12:45:04.025324 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:04.037783 master-0 kubenswrapper[31420]: I0220 12:45:04.035270 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt8xt\" (UniqueName: \"kubernetes.io/projected/1e7c90d8-6653-4682-b89d-6aa5421c3b29-kube-api-access-rt8xt\") pod \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") "
Feb 20 12:45:04.037783 master-0 kubenswrapper[31420]: I0220 12:45:04.035498 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e7c90d8-6653-4682-b89d-6aa5421c3b29-secret-volume\") pod \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") "
Feb 20 12:45:04.037783 master-0 kubenswrapper[31420]: I0220 12:45:04.035714 31420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7c90d8-6653-4682-b89d-6aa5421c3b29-config-volume\") pod \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\" (UID: \"1e7c90d8-6653-4682-b89d-6aa5421c3b29\") "
Feb 20 12:45:04.037783 master-0 kubenswrapper[31420]: I0220 12:45:04.036481 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7c90d8-6653-4682-b89d-6aa5421c3b29-config-volume" (OuterVolumeSpecName: "config-volume") pod "1e7c90d8-6653-4682-b89d-6aa5421c3b29" (UID: "1e7c90d8-6653-4682-b89d-6aa5421c3b29"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 12:45:04.037783 master-0 kubenswrapper[31420]: I0220 12:45:04.037694 31420 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7c90d8-6653-4682-b89d-6aa5421c3b29-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 20 12:45:04.040972 master-0 kubenswrapper[31420]: I0220 12:45:04.040864 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7c90d8-6653-4682-b89d-6aa5421c3b29-kube-api-access-rt8xt" (OuterVolumeSpecName: "kube-api-access-rt8xt") pod "1e7c90d8-6653-4682-b89d-6aa5421c3b29" (UID: "1e7c90d8-6653-4682-b89d-6aa5421c3b29"). InnerVolumeSpecName "kube-api-access-rt8xt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 12:45:04.042747 master-0 kubenswrapper[31420]: I0220 12:45:04.042670 31420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7c90d8-6653-4682-b89d-6aa5421c3b29-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1e7c90d8-6653-4682-b89d-6aa5421c3b29" (UID: "1e7c90d8-6653-4682-b89d-6aa5421c3b29"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 12:45:04.139940 master-0 kubenswrapper[31420]: I0220 12:45:04.139810 31420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt8xt\" (UniqueName: \"kubernetes.io/projected/1e7c90d8-6653-4682-b89d-6aa5421c3b29-kube-api-access-rt8xt\") on node \"master-0\" DevicePath \"\""
Feb 20 12:45:04.140306 master-0 kubenswrapper[31420]: I0220 12:45:04.140133 31420 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1e7c90d8-6653-4682-b89d-6aa5421c3b29-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 20 12:45:04.511939 master-0 kubenswrapper[31420]: I0220 12:45:04.511800 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2" event={"ID":"1e7c90d8-6653-4682-b89d-6aa5421c3b29","Type":"ContainerDied","Data":"1c525c58d880c0bf116adcb58c7a645fd7b02ea932d746142b8493a218b997ba"}
Feb 20 12:45:04.511939 master-0 kubenswrapper[31420]: I0220 12:45:04.511905 31420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c525c58d880c0bf116adcb58c7a645fd7b02ea932d746142b8493a218b997ba"
Feb 20 12:45:04.512362 master-0 kubenswrapper[31420]: I0220 12:45:04.512013 31420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526525-txvm2"
Feb 20 12:45:04.619453 master-0 kubenswrapper[31420]: I0220 12:45:04.619357 31420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk"]
Feb 20 12:45:04.637115 master-0 kubenswrapper[31420]: I0220 12:45:04.636819 31420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526480-9s2sk"]
Feb 20 12:45:05.523824 master-0 kubenswrapper[31420]: I0220 12:45:05.523712 31420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec8d018-dd5b-4607-b3d7-1c824aa9a193" path="/var/lib/kubelet/pods/cec8d018-dd5b-4607-b3d7-1c824aa9a193/volumes"
Feb 20 12:46:00.929841 master-0 kubenswrapper[31420]: I0220 12:46:00.929742 31420 scope.go:117] "RemoveContainer" containerID="9c597a65a05cee62b3e0960e640acde8f3c03a2720e3886a29813cc02d33c3b4"
Feb 20 12:57:38.575930 master-0 kubenswrapper[31420]: I0220 12:57:38.575845 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gbq7x/must-gather-4w8l7"]
Feb 20 12:57:38.580162 master-0 kubenswrapper[31420]: E0220 12:57:38.580008 31420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7c90d8-6653-4682-b89d-6aa5421c3b29" containerName="collect-profiles"
Feb 20 12:57:38.580162 master-0 kubenswrapper[31420]: I0220 12:57:38.580046 31420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7c90d8-6653-4682-b89d-6aa5421c3b29" containerName="collect-profiles"
Feb 20 12:57:38.580472 master-0 kubenswrapper[31420]: I0220 12:57:38.580447 31420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7c90d8-6653-4682-b89d-6aa5421c3b29" containerName="collect-profiles"
Feb 20 12:57:38.594767 master-0 kubenswrapper[31420]: I0220 12:57:38.594689 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gbq7x/must-gather-7gvpz"]
Feb 20 12:57:38.595005 master-0 kubenswrapper[31420]: I0220 12:57:38.594878 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gbq7x/must-gather-4w8l7"
Feb 20 12:57:38.597038 master-0 kubenswrapper[31420]: I0220 12:57:38.597000 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gbq7x/must-gather-7gvpz"
Feb 20 12:57:38.600735 master-0 kubenswrapper[31420]: I0220 12:57:38.600688 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gbq7x"/"openshift-service-ca.crt"
Feb 20 12:57:38.600974 master-0 kubenswrapper[31420]: I0220 12:57:38.600951 31420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gbq7x"/"kube-root-ca.crt"
Feb 20 12:57:38.604311 master-0 kubenswrapper[31420]: I0220 12:57:38.604270 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gbq7x/must-gather-4w8l7"]
Feb 20 12:57:38.613298 master-0 kubenswrapper[31420]: I0220 12:57:38.613234 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gbq7x/must-gather-7gvpz"]
Feb 20 12:57:38.781155 master-0 kubenswrapper[31420]: I0220 12:57:38.781092 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smd6n\" (UniqueName: \"kubernetes.io/projected/8d6e3d21-13a3-488e-9836-0fb29dbd2da4-kube-api-access-smd6n\") pod \"must-gather-7gvpz\" (UID: \"8d6e3d21-13a3-488e-9836-0fb29dbd2da4\") " pod="openshift-must-gather-gbq7x/must-gather-7gvpz"
Feb 20 12:57:38.781377 master-0 kubenswrapper[31420]: I0220 12:57:38.781236 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65d50790-e6da-4ab0-a311-a6012af250b3-must-gather-output\") pod \"must-gather-4w8l7\" (UID: \"65d50790-e6da-4ab0-a311-a6012af250b3\") " pod="openshift-must-gather-gbq7x/must-gather-4w8l7"
Feb 20 12:57:38.781377 master-0 kubenswrapper[31420]: I0220 12:57:38.781282 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d6e3d21-13a3-488e-9836-0fb29dbd2da4-must-gather-output\") pod \"must-gather-7gvpz\" (UID: \"8d6e3d21-13a3-488e-9836-0fb29dbd2da4\") " pod="openshift-must-gather-gbq7x/must-gather-7gvpz"
Feb 20 12:57:38.781497 master-0 kubenswrapper[31420]: I0220 12:57:38.781445 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8msdh\" (UniqueName: \"kubernetes.io/projected/65d50790-e6da-4ab0-a311-a6012af250b3-kube-api-access-8msdh\") pod \"must-gather-4w8l7\" (UID: \"65d50790-e6da-4ab0-a311-a6012af250b3\") " pod="openshift-must-gather-gbq7x/must-gather-4w8l7"
Feb 20 12:57:38.884740 master-0 kubenswrapper[31420]: I0220 12:57:38.884607 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smd6n\" (UniqueName: \"kubernetes.io/projected/8d6e3d21-13a3-488e-9836-0fb29dbd2da4-kube-api-access-smd6n\") pod \"must-gather-7gvpz\" (UID: \"8d6e3d21-13a3-488e-9836-0fb29dbd2da4\") " pod="openshift-must-gather-gbq7x/must-gather-7gvpz"
Feb 20 12:57:38.884920 master-0 kubenswrapper[31420]: I0220 12:57:38.884779 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65d50790-e6da-4ab0-a311-a6012af250b3-must-gather-output\") pod \"must-gather-4w8l7\" (UID: \"65d50790-e6da-4ab0-a311-a6012af250b3\") " pod="openshift-must-gather-gbq7x/must-gather-4w8l7"
Feb 20 12:57:38.884920 master-0 kubenswrapper[31420]: I0220 12:57:38.884830 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d6e3d21-13a3-488e-9836-0fb29dbd2da4-must-gather-output\") pod \"must-gather-7gvpz\" (UID: \"8d6e3d21-13a3-488e-9836-0fb29dbd2da4\") " pod="openshift-must-gather-gbq7x/must-gather-7gvpz"
Feb 20 12:57:38.884986 master-0 kubenswrapper[31420]: I0220 12:57:38.884912 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8msdh\" (UniqueName: \"kubernetes.io/projected/65d50790-e6da-4ab0-a311-a6012af250b3-kube-api-access-8msdh\") pod \"must-gather-4w8l7\" (UID: \"65d50790-e6da-4ab0-a311-a6012af250b3\") " pod="openshift-must-gather-gbq7x/must-gather-4w8l7"
Feb 20 12:57:38.886546 master-0 kubenswrapper[31420]: I0220 12:57:38.886495 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65d50790-e6da-4ab0-a311-a6012af250b3-must-gather-output\") pod \"must-gather-4w8l7\" (UID: \"65d50790-e6da-4ab0-a311-a6012af250b3\") " pod="openshift-must-gather-gbq7x/must-gather-4w8l7"
Feb 20 12:57:38.887049 master-0 kubenswrapper[31420]: I0220 12:57:38.887015 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d6e3d21-13a3-488e-9836-0fb29dbd2da4-must-gather-output\") pod \"must-gather-7gvpz\" (UID: \"8d6e3d21-13a3-488e-9836-0fb29dbd2da4\") " pod="openshift-must-gather-gbq7x/must-gather-7gvpz"
Feb 20 12:57:38.902827 master-0 kubenswrapper[31420]: I0220 12:57:38.902775 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smd6n\" (UniqueName: \"kubernetes.io/projected/8d6e3d21-13a3-488e-9836-0fb29dbd2da4-kube-api-access-smd6n\") pod \"must-gather-7gvpz\" (UID: \"8d6e3d21-13a3-488e-9836-0fb29dbd2da4\") " pod="openshift-must-gather-gbq7x/must-gather-7gvpz"
Feb 20 12:57:38.904723 master-0 kubenswrapper[31420]: I0220 12:57:38.904685 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8msdh\" (UniqueName: \"kubernetes.io/projected/65d50790-e6da-4ab0-a311-a6012af250b3-kube-api-access-8msdh\") pod \"must-gather-4w8l7\" (UID: \"65d50790-e6da-4ab0-a311-a6012af250b3\") " pod="openshift-must-gather-gbq7x/must-gather-4w8l7"
Feb 20 12:57:38.926347 master-0 kubenswrapper[31420]: I0220 12:57:38.926258 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gbq7x/must-gather-4w8l7"
Feb 20 12:57:38.958992 master-0 kubenswrapper[31420]: I0220 12:57:38.958915 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gbq7x/must-gather-7gvpz"
Feb 20 12:57:39.513739 master-0 kubenswrapper[31420]: I0220 12:57:39.513692 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gbq7x/must-gather-7gvpz"]
Feb 20 12:57:39.519602 master-0 kubenswrapper[31420]: I0220 12:57:39.519574 31420 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 12:57:39.636256 master-0 kubenswrapper[31420]: W0220 12:57:39.636148 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65d50790_e6da_4ab0_a311_a6012af250b3.slice/crio-3f4b301c30f3f50f1850a4713116dbc6d237b5e92607cde011e089636b5c5bb7 WatchSource:0}: Error finding container 3f4b301c30f3f50f1850a4713116dbc6d237b5e92607cde011e089636b5c5bb7: Status 404 returned error can't find the container with id 3f4b301c30f3f50f1850a4713116dbc6d237b5e92607cde011e089636b5c5bb7
Feb 20 12:57:39.636767 master-0 kubenswrapper[31420]: I0220 12:57:39.636411 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gbq7x/must-gather-4w8l7"]
Feb 20 12:57:40.358847 master-0 kubenswrapper[31420]: I0220 12:57:40.358750 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/must-gather-7gvpz" event={"ID":"8d6e3d21-13a3-488e-9836-0fb29dbd2da4","Type":"ContainerStarted","Data":"72a0d3ad8450d547db3cb4e8e4595e4ff703d3c239968a56fcd89f56a07fb85d"}
Feb 20 12:57:40.380814 master-0 kubenswrapper[31420]: I0220 12:57:40.380714 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/must-gather-4w8l7" event={"ID":"65d50790-e6da-4ab0-a311-a6012af250b3","Type":"ContainerStarted","Data":"3f4b301c30f3f50f1850a4713116dbc6d237b5e92607cde011e089636b5c5bb7"}
Feb 20 12:57:42.409480 master-0 kubenswrapper[31420]: I0220 12:57:42.409407 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/must-gather-4w8l7" event={"ID":"65d50790-e6da-4ab0-a311-a6012af250b3","Type":"ContainerStarted","Data":"efb3ce45dd55ec49ce2922090ed5e773192db18ede008be9c43fafb0b8ca8296"}
Feb 20 12:57:42.409480 master-0 kubenswrapper[31420]: I0220 12:57:42.409475 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/must-gather-4w8l7" event={"ID":"65d50790-e6da-4ab0-a311-a6012af250b3","Type":"ContainerStarted","Data":"b14abf8d79a26b89ee6e22658edd6250cdc067ee4d4ce8855f817bd0f56ac216"}
Feb 20 12:57:42.444025 master-0 kubenswrapper[31420]: I0220 12:57:42.443924 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gbq7x/must-gather-4w8l7" podStartSLOduration=2.903385864 podStartE2EDuration="4.443899689s" podCreationTimestamp="2026-02-20 12:57:38 +0000 UTC" firstStartedPulling="2026-02-20 12:57:39.640279826 +0000 UTC m=+3164.359518077" lastFinishedPulling="2026-02-20 12:57:41.180793661 +0000 UTC m=+3165.900031902" observedRunningTime="2026-02-20 12:57:42.426687227 +0000 UTC m=+3167.145925478" watchObservedRunningTime="2026-02-20 12:57:42.443899689 +0000 UTC m=+3167.163137930"
Feb 20 12:57:44.292688 master-0 kubenswrapper[31420]: I0220 12:57:44.287810 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-dwvgg_89383482-190e-4f74-a81e-b1547e5b9ae6/cluster-version-operator/0.log"
Feb 20 12:57:44.880635 master-0 kubenswrapper[31420]: I0220 12:57:44.878622 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-dwvgg_89383482-190e-4f74-a81e-b1547e5b9ae6/cluster-version-operator/1.log"
Feb 20 12:57:48.040600 master-0 kubenswrapper[31420]: I0220 12:57:48.040518 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-vdrkc_d8a5df14-16b6-4d50-900b-8f0c241b1d1b/controller/0.log"
Feb 20 12:57:48.054322 master-0 kubenswrapper[31420]: I0220 12:57:48.054272 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-dml4g_8585d57e-59ce-4616-9c40-80fa1d13357c/nmstate-console-plugin/0.log"
Feb 20 12:57:48.054642 master-0 kubenswrapper[31420]: I0220 12:57:48.054435 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-vdrkc_d8a5df14-16b6-4d50-900b-8f0c241b1d1b/kube-rbac-proxy/0.log"
Feb 20 12:57:48.096607 master-0 kubenswrapper[31420]: I0220 12:57:48.091274 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-k6pl2_b686af0c-791e-42be-b608-e1a265d973a0/nmstate-handler/0.log"
Feb 20 12:57:48.112595 master-0 kubenswrapper[31420]: I0220 12:57:48.112234 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-r22kg_a3c9202b-541d-4ec7-9ef5-d5da935ad5d9/nmstate-metrics/0.log"
Feb 20 12:57:48.133677 master-0 kubenswrapper[31420]: I0220 12:57:48.131416 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-r22kg_a3c9202b-541d-4ec7-9ef5-d5da935ad5d9/kube-rbac-proxy/0.log"
Feb 20 12:57:48.133677 master-0 kubenswrapper[31420]: I0220 12:57:48.132440 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/controller/0.log"
Feb 20 12:57:48.165581 master-0 kubenswrapper[31420]: I0220 12:57:48.163576 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-slnc4_ba64f39d-a56c-45b2-8dcb-b796be88d71b/nmstate-operator/0.log"
Feb 20 12:57:48.274556 master-0 kubenswrapper[31420]: I0220 12:57:48.273853 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-qj8cb_6cd442ad-ad65-497a-b5eb-bc79c3023466/nmstate-webhook/0.log"
Feb 20 12:57:49.426543 master-0 kubenswrapper[31420]: I0220 12:57:49.425842 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/frr/0.log"
Feb 20 12:57:49.441542 master-0 kubenswrapper[31420]: I0220 12:57:49.437847 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/reloader/0.log"
Feb 20 12:57:49.451625 master-0 kubenswrapper[31420]: I0220 12:57:49.446859 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/frr-metrics/0.log"
Feb 20 12:57:49.460555 master-0 kubenswrapper[31420]: I0220 12:57:49.457583 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/kube-rbac-proxy/0.log"
Feb 20 12:57:49.476548 master-0 kubenswrapper[31420]: I0220 12:57:49.470881 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/kube-rbac-proxy-frr/0.log"
Feb 20 12:57:49.490545 master-0 kubenswrapper[31420]: I0220 12:57:49.482876 31420 log.go:25] "Finished parsing log file"
path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/cp-frr-files/0.log" Feb 20 12:57:49.494539 master-0 kubenswrapper[31420]: I0220 12:57:49.490935 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/cp-reloader/0.log" Feb 20 12:57:49.515553 master-0 kubenswrapper[31420]: I0220 12:57:49.511719 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/cp-metrics/0.log" Feb 20 12:57:49.532549 master-0 kubenswrapper[31420]: I0220 12:57:49.532293 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-d7llz_91455b18-03a0-49c7-aa61-59b91e88a5fe/frr-k8s-webhook-server/0.log" Feb 20 12:57:49.553547 master-0 kubenswrapper[31420]: I0220 12:57:49.552731 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7865667bdc-lwg78_05b963e1-7eca-4b48-b411-ce2bbf48fbf2/manager/0.log" Feb 20 12:57:49.570553 master-0 kubenswrapper[31420]: I0220 12:57:49.565366 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8486f65d77-9ck87_882e49fa-c8b8-4f18-a340-4dfdd950a449/webhook-server/0.log" Feb 20 12:57:49.983727 master-0 kubenswrapper[31420]: I0220 12:57:49.983640 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r94p4_b91f2548-98e3-418c-9a05-58502d67d66f/speaker/0.log" Feb 20 12:57:49.993552 master-0 kubenswrapper[31420]: I0220 12:57:49.989673 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r94p4_b91f2548-98e3-418c-9a05-58502d67d66f/kube-rbac-proxy/0.log" Feb 20 12:57:51.132756 master-0 kubenswrapper[31420]: I0220 12:57:51.132720 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcdctl/0.log" Feb 20 12:57:51.203545 master-0 kubenswrapper[31420]: I0220 12:57:51.200703 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-ffd7cb8f5-hmkkp_b5175581-36b3-4313-99aa-2e404a4c38cb/oauth-openshift/0.log" Feb 20 12:57:51.446870 master-0 kubenswrapper[31420]: I0220 12:57:51.446750 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd/0.log" Feb 20 12:57:51.474174 master-0 kubenswrapper[31420]: I0220 12:57:51.473799 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-metrics/0.log" Feb 20 12:57:51.491036 master-0 kubenswrapper[31420]: I0220 12:57:51.488922 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-readyz/0.log" Feb 20 12:57:51.515671 master-0 kubenswrapper[31420]: I0220 12:57:51.509346 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-rev/0.log" Feb 20 12:57:51.539973 master-0 kubenswrapper[31420]: I0220 12:57:51.539928 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/setup/0.log" Feb 20 12:57:51.551179 master-0 kubenswrapper[31420]: I0220 12:57:51.551141 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-ensure-env-vars/0.log" Feb 20 12:57:51.565555 master-0 kubenswrapper[31420]: I0220 12:57:51.565452 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-resources-copy/0.log" Feb 20 12:57:51.607551 master-0 kubenswrapper[31420]: I0220 12:57:51.607264 31420 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_5710eb66-9717-4beb-a8b2-19f6886376b3/installer/0.log" Feb 20 12:57:51.671795 master-0 kubenswrapper[31420]: I0220 12:57:51.671749 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_305f625e-16b0-4840-a9e2-25571b49ad2a/installer/0.log" Feb 20 12:57:52.409419 master-0 kubenswrapper[31420]: I0220 12:57:52.409379 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-vtcnw_6c3aa45a-44cc-48fb-a478-ce01a70c4b02/authentication-operator/1.log" Feb 20 12:57:52.442738 master-0 kubenswrapper[31420]: I0220 12:57:52.442687 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-vtcnw_6c3aa45a-44cc-48fb-a478-ce01a70c4b02/authentication-operator/2.log" Feb 20 12:57:52.647558 master-0 kubenswrapper[31420]: I0220 12:57:52.645311 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/must-gather-7gvpz" event={"ID":"8d6e3d21-13a3-488e-9836-0fb29dbd2da4","Type":"ContainerStarted","Data":"7ab208aade38178fdfb123f247fe46ebee6cb663c6ff972f270836b22e80d950"} Feb 20 12:57:52.647558 master-0 kubenswrapper[31420]: I0220 12:57:52.645382 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/must-gather-7gvpz" event={"ID":"8d6e3d21-13a3-488e-9836-0fb29dbd2da4","Type":"ContainerStarted","Data":"284265c85a4d3c28f4d3940098ba674f3a9828fc75f0291a0de78ce525096e89"} Feb 20 12:57:52.678607 master-0 kubenswrapper[31420]: I0220 12:57:52.678141 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gbq7x/must-gather-7gvpz" podStartSLOduration=2.673825477 podStartE2EDuration="14.678123415s" podCreationTimestamp="2026-02-20 12:57:38 +0000 UTC" firstStartedPulling="2026-02-20 12:57:39.519502591 +0000 
UTC m=+3164.238740822" lastFinishedPulling="2026-02-20 12:57:51.523800519 +0000 UTC m=+3176.243038760" observedRunningTime="2026-02-20 12:57:52.672157704 +0000 UTC m=+3177.391395945" watchObservedRunningTime="2026-02-20 12:57:52.678123415 +0000 UTC m=+3177.397361656" Feb 20 12:57:52.704714 master-0 kubenswrapper[31420]: I0220 12:57:52.704650 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-s6zmp_ab7ffa68-5f62-4dc8-a24a-9988f3bb1edd/assisted-installer-controller/0.log" Feb 20 12:57:53.337611 master-0 kubenswrapper[31420]: I0220 12:57:53.337331 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-fkkd5_9c078827-3bdb-4509-aeb3-eb558df1f6e7/router/4.log" Feb 20 12:57:53.340334 master-0 kubenswrapper[31420]: I0220 12:57:53.340281 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-fkkd5_9c078827-3bdb-4509-aeb3-eb558df1f6e7/router/3.log" Feb 20 12:57:53.398695 master-0 kubenswrapper[31420]: I0220 12:57:53.398587 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p"] Feb 20 12:57:53.400143 master-0 kubenswrapper[31420]: I0220 12:57:53.400112 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.475623 master-0 kubenswrapper[31420]: I0220 12:57:53.475566 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p"] Feb 20 12:57:53.491604 master-0 kubenswrapper[31420]: I0220 12:57:53.489947 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-sys\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.491604 master-0 kubenswrapper[31420]: I0220 12:57:53.490014 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjfx5\" (UniqueName: \"kubernetes.io/projected/b6954b56-864b-46f8-99e2-822d254db6e6-kube-api-access-sjfx5\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.491604 master-0 kubenswrapper[31420]: I0220 12:57:53.490048 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-proc\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.491604 master-0 kubenswrapper[31420]: I0220 12:57:53.490081 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-podres\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " 
pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.491604 master-0 kubenswrapper[31420]: I0220 12:57:53.490111 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-lib-modules\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.592606 master-0 kubenswrapper[31420]: I0220 12:57:53.591859 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-sys\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.592606 master-0 kubenswrapper[31420]: I0220 12:57:53.591975 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-sys\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.592606 master-0 kubenswrapper[31420]: I0220 12:57:53.592349 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjfx5\" (UniqueName: \"kubernetes.io/projected/b6954b56-864b-46f8-99e2-822d254db6e6-kube-api-access-sjfx5\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.592606 master-0 kubenswrapper[31420]: I0220 12:57:53.592475 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: 
\"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-proc\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.593001 master-0 kubenswrapper[31420]: I0220 12:57:53.592617 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-podres\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.593001 master-0 kubenswrapper[31420]: I0220 12:57:53.592693 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-lib-modules\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.593001 master-0 kubenswrapper[31420]: I0220 12:57:53.592985 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-podres\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.593140 master-0 kubenswrapper[31420]: I0220 12:57:53.593077 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-proc\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.593387 master-0 kubenswrapper[31420]: I0220 12:57:53.593355 31420 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6954b56-864b-46f8-99e2-822d254db6e6-lib-modules\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.615598 master-0 kubenswrapper[31420]: I0220 12:57:53.615175 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjfx5\" (UniqueName: \"kubernetes.io/projected/b6954b56-864b-46f8-99e2-822d254db6e6-kube-api-access-sjfx5\") pod \"perf-node-gather-daemonset-l6w9p\" (UID: \"b6954b56-864b-46f8-99e2-822d254db6e6\") " pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:53.781664 master-0 kubenswrapper[31420]: I0220 12:57:53.781598 31420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:54.157416 master-0 kubenswrapper[31420]: I0220 12:57:54.157292 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-69fc79b84-rr6rh_fca213c3-42ca-4341-a2e6-a143b9389f9e/oauth-apiserver/0.log" Feb 20 12:57:54.171219 master-0 kubenswrapper[31420]: I0220 12:57:54.168224 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-69fc79b84-rr6rh_fca213c3-42ca-4341-a2e6-a143b9389f9e/fix-audit-permissions/0.log" Feb 20 12:57:54.278430 master-0 kubenswrapper[31420]: W0220 12:57:54.278301 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb6954b56_864b_46f8_99e2_822d254db6e6.slice/crio-7dadccf7a8c3cfa0a79a3347cab17141799e312ec498a1fbe6e7dd97a9358295 WatchSource:0}: Error finding container 7dadccf7a8c3cfa0a79a3347cab17141799e312ec498a1fbe6e7dd97a9358295: Status 404 returned error can't find the container with id 7dadccf7a8c3cfa0a79a3347cab17141799e312ec498a1fbe6e7dd97a9358295 Feb 20 12:57:54.281185 
master-0 kubenswrapper[31420]: I0220 12:57:54.281142 31420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p"] Feb 20 12:57:54.567380 master-0 kubenswrapper[31420]: E0220 12:57:54.567309 31420 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.32.10:57314->192.168.32.10:45797: read tcp 192.168.32.10:57314->192.168.32.10:45797: read: connection reset by peer Feb 20 12:57:54.668303 master-0 kubenswrapper[31420]: I0220 12:57:54.668235 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" event={"ID":"b6954b56-864b-46f8-99e2-822d254db6e6","Type":"ContainerStarted","Data":"7dadccf7a8c3cfa0a79a3347cab17141799e312ec498a1fbe6e7dd97a9358295"} Feb 20 12:57:54.985282 master-0 kubenswrapper[31420]: I0220 12:57:54.985244 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt_8ab951b1-6898-4357-b813-16365f3f89d5/kube-rbac-proxy/0.log" Feb 20 12:57:55.013562 master-0 kubenswrapper[31420]: I0220 12:57:55.012990 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt_8ab951b1-6898-4357-b813-16365f3f89d5/cluster-autoscaler-operator/0.log" Feb 20 12:57:55.029552 master-0 kubenswrapper[31420]: I0220 12:57:55.029253 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-sksbt_8ab951b1-6898-4357-b813-16365f3f89d5/cluster-autoscaler-operator/1.log" Feb 20 12:57:55.063549 master-0 kubenswrapper[31420]: I0220 12:57:55.056014 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/cluster-baremetal-operator/2.log" Feb 20 12:57:55.063549 master-0 kubenswrapper[31420]: I0220 12:57:55.056169 31420 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/cluster-baremetal-operator/1.log" Feb 20 12:57:55.082086 master-0 kubenswrapper[31420]: I0220 12:57:55.082030 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k95mq_bd609bd3-2525-4b88-8f07-94a0418fb582/baremetal-kube-rbac-proxy/0.log" Feb 20 12:57:55.101241 master-0 kubenswrapper[31420]: I0220 12:57:55.101186 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-fn7j5_21e8e44b-b883-4afb-af90-d6c1265edf34/control-plane-machine-set-operator/1.log" Feb 20 12:57:55.101702 master-0 kubenswrapper[31420]: I0220 12:57:55.101657 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-fn7j5_21e8e44b-b883-4afb-af90-d6c1265edf34/control-plane-machine-set-operator/0.log" Feb 20 12:57:55.120536 master-0 kubenswrapper[31420]: I0220 12:57:55.120469 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-dmvlr_62fc400b-b3dd-4134-bd27-69dd8369153a/kube-rbac-proxy/0.log" Feb 20 12:57:55.142455 master-0 kubenswrapper[31420]: I0220 12:57:55.142394 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-dmvlr_62fc400b-b3dd-4134-bd27-69dd8369153a/machine-api-operator/0.log" Feb 20 12:57:55.612628 master-0 kubenswrapper[31420]: I0220 12:57:55.605277 31420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gbq7x/master-0-debug-z7k5t"] Feb 20 12:57:55.612628 master-0 kubenswrapper[31420]: I0220 12:57:55.607307 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" Feb 20 12:57:55.680228 master-0 kubenswrapper[31420]: I0220 12:57:55.680146 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" event={"ID":"b6954b56-864b-46f8-99e2-822d254db6e6","Type":"ContainerStarted","Data":"66b3d5c9363991641733efd935baa019652d56011f349365c88467849bfcda81"} Feb 20 12:57:55.680461 master-0 kubenswrapper[31420]: I0220 12:57:55.680284 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:57:55.704669 master-0 kubenswrapper[31420]: I0220 12:57:55.704586 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" podStartSLOduration=2.704568933 podStartE2EDuration="2.704568933s" podCreationTimestamp="2026-02-20 12:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 12:57:55.695863064 +0000 UTC m=+3180.415101315" watchObservedRunningTime="2026-02-20 12:57:55.704568933 +0000 UTC m=+3180.423807174" Feb 20 12:57:55.749151 master-0 kubenswrapper[31420]: I0220 12:57:55.749073 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/488656f3-6df5-4d96-9e9f-84b142bde5d8-host\") pod \"master-0-debug-z7k5t\" (UID: \"488656f3-6df5-4d96-9e9f-84b142bde5d8\") " pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" Feb 20 12:57:55.749354 master-0 kubenswrapper[31420]: I0220 12:57:55.749325 31420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbxp\" (UniqueName: \"kubernetes.io/projected/488656f3-6df5-4d96-9e9f-84b142bde5d8-kube-api-access-2hbxp\") pod \"master-0-debug-z7k5t\" (UID: 
\"488656f3-6df5-4d96-9e9f-84b142bde5d8\") " pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" Feb 20 12:57:55.852209 master-0 kubenswrapper[31420]: I0220 12:57:55.852135 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hbxp\" (UniqueName: \"kubernetes.io/projected/488656f3-6df5-4d96-9e9f-84b142bde5d8-kube-api-access-2hbxp\") pod \"master-0-debug-z7k5t\" (UID: \"488656f3-6df5-4d96-9e9f-84b142bde5d8\") " pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" Feb 20 12:57:55.852692 master-0 kubenswrapper[31420]: I0220 12:57:55.852661 31420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/488656f3-6df5-4d96-9e9f-84b142bde5d8-host\") pod \"master-0-debug-z7k5t\" (UID: \"488656f3-6df5-4d96-9e9f-84b142bde5d8\") " pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" Feb 20 12:57:55.852927 master-0 kubenswrapper[31420]: I0220 12:57:55.852772 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/488656f3-6df5-4d96-9e9f-84b142bde5d8-host\") pod \"master-0-debug-z7k5t\" (UID: \"488656f3-6df5-4d96-9e9f-84b142bde5d8\") " pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" Feb 20 12:57:55.871001 master-0 kubenswrapper[31420]: I0220 12:57:55.870844 31420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hbxp\" (UniqueName: \"kubernetes.io/projected/488656f3-6df5-4d96-9e9f-84b142bde5d8-kube-api-access-2hbxp\") pod \"master-0-debug-z7k5t\" (UID: \"488656f3-6df5-4d96-9e9f-84b142bde5d8\") " pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" Feb 20 12:57:55.927224 master-0 kubenswrapper[31420]: I0220 12:57:55.927124 31420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" Feb 20 12:57:55.972056 master-0 kubenswrapper[31420]: W0220 12:57:55.971980 31420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod488656f3_6df5_4d96_9e9f_84b142bde5d8.slice/crio-1bf55540a0653fd2ca67f0797347897d5af2d08423ddff5a48dacf4fc5e6d92f WatchSource:0}: Error finding container 1bf55540a0653fd2ca67f0797347897d5af2d08423ddff5a48dacf4fc5e6d92f: Status 404 returned error can't find the container with id 1bf55540a0653fd2ca67f0797347897d5af2d08423ddff5a48dacf4fc5e6d92f Feb 20 12:57:56.409724 master-0 kubenswrapper[31420]: I0220 12:57:56.409661 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/cluster-cloud-controller-manager/0.log" Feb 20 12:57:56.411519 master-0 kubenswrapper[31420]: I0220 12:57:56.411472 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/cluster-cloud-controller-manager/1.log" Feb 20 12:57:56.425386 master-0 kubenswrapper[31420]: I0220 12:57:56.425317 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/config-sync-controllers/0.log" Feb 20 12:57:56.427709 master-0 kubenswrapper[31420]: I0220 12:57:56.427664 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/config-sync-controllers/1.log" Feb 20 12:57:56.440366 master-0 kubenswrapper[31420]: I0220 12:57:56.440308 31420 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-7bbvg_e8c48a22-ed96-42c5-ac4a-dd7d4f204539/kube-rbac-proxy/0.log" Feb 20 12:57:56.712607 master-0 kubenswrapper[31420]: I0220 12:57:56.708641 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" event={"ID":"488656f3-6df5-4d96-9e9f-84b142bde5d8","Type":"ContainerStarted","Data":"1bf55540a0653fd2ca67f0797347897d5af2d08423ddff5a48dacf4fc5e6d92f"} Feb 20 12:57:57.145463 master-0 kubenswrapper[31420]: I0220 12:57:57.145412 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-d44a4-api-0_5db8e133-5ad8-492f-8dda-44e70f29dd4d/cinder-d44a4-api-log/0.log" Feb 20 12:57:57.182807 master-0 kubenswrapper[31420]: I0220 12:57:57.182651 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-d44a4-api-0_5db8e133-5ad8-492f-8dda-44e70f29dd4d/cinder-api/0.log" Feb 20 12:57:57.258259 master-0 kubenswrapper[31420]: I0220 12:57:57.258199 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-d44a4-backup-0_d41cf8dc-8d38-4183-89fb-5d89372e867e/cinder-backup/0.log" Feb 20 12:57:57.321992 master-0 kubenswrapper[31420]: I0220 12:57:57.321925 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-d44a4-backup-0_d41cf8dc-8d38-4183-89fb-5d89372e867e/probe/0.log" Feb 20 12:57:57.385550 master-0 kubenswrapper[31420]: I0220 12:57:57.382443 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-d44a4-scheduler-0_0fcdb646-ba2b-466f-b072-1fd1b9e18a2d/cinder-scheduler/0.log" Feb 20 12:57:57.420629 master-0 kubenswrapper[31420]: I0220 12:57:57.419375 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-d44a4-scheduler-0_0fcdb646-ba2b-466f-b072-1fd1b9e18a2d/probe/0.log" Feb 20 12:57:57.515430 master-0 kubenswrapper[31420]: I0220 
12:57:57.515376 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-d44a4-volume-lvm-iscsi-0_49d0bb61-aecd-4962-a916-db3bcd3d9767/cinder-volume/0.log" Feb 20 12:57:57.541780 master-0 kubenswrapper[31420]: I0220 12:57:57.541723 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-d44a4-volume-lvm-iscsi-0_49d0bb61-aecd-4962-a916-db3bcd3d9767/probe/0.log" Feb 20 12:57:57.564604 master-0 kubenswrapper[31420]: I0220 12:57:57.562948 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5b6d4b4b47-8r4td_82a3ce2c-bbec-4ef7-8975-5fbaced911cf/dnsmasq-dns/0.log" Feb 20 12:57:57.571804 master-0 kubenswrapper[31420]: I0220 12:57:57.571718 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5b6d4b4b47-8r4td_82a3ce2c-bbec-4ef7-8975-5fbaced911cf/init/0.log" Feb 20 12:57:57.650956 master-0 kubenswrapper[31420]: I0220 12:57:57.650892 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-e60fa-default-external-api-0_d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d/glance-log/0.log" Feb 20 12:57:57.672554 master-0 kubenswrapper[31420]: I0220 12:57:57.672384 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-e60fa-default-external-api-0_d4a93cdc-dd9e-4d69-8dac-9ed7914ca20d/glance-httpd/0.log" Feb 20 12:57:57.735011 master-0 kubenswrapper[31420]: I0220 12:57:57.734945 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-e60fa-default-internal-api-0_f49670ed-6985-4875-98d4-8edc26c85fa7/glance-log/0.log" Feb 20 12:57:57.764437 master-0 kubenswrapper[31420]: I0220 12:57:57.764381 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-e60fa-default-internal-api-0_f49670ed-6985-4875-98d4-8edc26c85fa7/glance-httpd/0.log" Feb 20 12:57:57.778327 master-0 kubenswrapper[31420]: I0220 12:57:57.778244 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-584bd6df9d-zt8sf_056164a2-48ed-4f27-80d4-a1fae8ebba54/ironic-api-log/0.log" Feb 20 12:57:57.839502 master-0 kubenswrapper[31420]: I0220 12:57:57.839431 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-584bd6df9d-zt8sf_056164a2-48ed-4f27-80d4-a1fae8ebba54/ironic-api/0.log" Feb 20 12:57:57.848586 master-0 kubenswrapper[31420]: I0220 12:57:57.848519 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-584bd6df9d-zt8sf_056164a2-48ed-4f27-80d4-a1fae8ebba54/init/0.log" Feb 20 12:57:57.877189 master-0 kubenswrapper[31420]: I0220 12:57:57.877126 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1c15d66e-eaa8-4305-a5cb-1fa14e718d2c/ironic-conductor/0.log" Feb 20 12:57:57.892668 master-0 kubenswrapper[31420]: I0220 12:57:57.892622 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1c15d66e-eaa8-4305-a5cb-1fa14e718d2c/httpboot/0.log" Feb 20 12:57:57.899537 master-0 kubenswrapper[31420]: I0220 12:57:57.898950 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1c15d66e-eaa8-4305-a5cb-1fa14e718d2c/dnsmasq/0.log" Feb 20 12:57:57.906542 master-0 kubenswrapper[31420]: I0220 12:57:57.906069 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1c15d66e-eaa8-4305-a5cb-1fa14e718d2c/init/0.log" Feb 20 12:57:57.915516 master-0 kubenswrapper[31420]: I0220 12:57:57.915462 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1c15d66e-eaa8-4305-a5cb-1fa14e718d2c/ironic-python-agent-init/0.log" Feb 20 12:57:58.650636 master-0 kubenswrapper[31420]: I0220 12:57:58.650493 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-fq68q_ef18ace4-7316-4600-9be9-2adc792705e9/kube-rbac-proxy/0.log" Feb 20 
12:57:58.721236 master-0 kubenswrapper[31420]: I0220 12:57:58.720863 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1c15d66e-eaa8-4305-a5cb-1fa14e718d2c/pxe-init/0.log" Feb 20 12:57:58.727366 master-0 kubenswrapper[31420]: I0220 12:57:58.727318 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-fq68q_ef18ace4-7316-4600-9be9-2adc792705e9/cloud-credential-operator/0.log" Feb 20 12:57:58.809442 master-0 kubenswrapper[31420]: I0220 12:57:58.809362 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f61d99d3-557f-4054-9f41-b5fa83cb1ba9/ironic-inspector-httpd/0.log" Feb 20 12:57:58.876624 master-0 kubenswrapper[31420]: I0220 12:57:58.876443 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f61d99d3-557f-4054-9f41-b5fa83cb1ba9/ironic-inspector/0.log" Feb 20 12:57:58.889622 master-0 kubenswrapper[31420]: I0220 12:57:58.888764 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f61d99d3-557f-4054-9f41-b5fa83cb1ba9/inspector-httpboot/0.log" Feb 20 12:57:58.899303 master-0 kubenswrapper[31420]: I0220 12:57:58.897128 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f61d99d3-557f-4054-9f41-b5fa83cb1ba9/ramdisk-logs/0.log" Feb 20 12:57:58.907895 master-0 kubenswrapper[31420]: I0220 12:57:58.907791 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f61d99d3-557f-4054-9f41-b5fa83cb1ba9/inspector-dnsmasq/0.log" Feb 20 12:57:58.915697 master-0 kubenswrapper[31420]: I0220 12:57:58.915647 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f61d99d3-557f-4054-9f41-b5fa83cb1ba9/ironic-python-agent-init/0.log" Feb 20 12:57:58.933256 master-0 kubenswrapper[31420]: I0220 12:57:58.931108 31420 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f61d99d3-557f-4054-9f41-b5fa83cb1ba9/inspector-pxe-init/0.log" Feb 20 12:57:58.943159 master-0 kubenswrapper[31420]: I0220 12:57:58.943087 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-5db78c68bd-t4cm6_d2416044-6dc6-4ce7-8b30-574bce497d5e/ironic-neutron-agent/2.log" Feb 20 12:57:58.948758 master-0 kubenswrapper[31420]: I0220 12:57:58.947520 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-5db78c68bd-t4cm6_d2416044-6dc6-4ce7-8b30-574bce497d5e/ironic-neutron-agent/1.log" Feb 20 12:57:59.046338 master-0 kubenswrapper[31420]: I0220 12:57:59.046287 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5947585c67-kc792_a77b88bf-7a95-4f44-80c3-75df9a9a3c2b/keystone-api/0.log" Feb 20 12:58:00.695295 master-0 kubenswrapper[31420]: I0220 12:58:00.693988 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-mk9fd_02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/openshift-config-operator/2.log" Feb 20 12:58:00.701736 master-0 kubenswrapper[31420]: I0220 12:58:00.701645 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-mk9fd_02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/openshift-config-operator/3.log" Feb 20 12:58:00.713473 master-0 kubenswrapper[31420]: I0220 12:58:00.713414 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-mk9fd_02c6a0e7-6363-4d7e-a8eb-b4d38b74b145/openshift-api/0.log" Feb 20 12:58:01.689018 master-0 kubenswrapper[31420]: I0220 12:58:01.688952 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-74ql7_298cd5fa-38c1-4bd3-a300-d82166658f50/console-operator/0.log" Feb 20 12:58:02.674643 
master-0 kubenswrapper[31420]: I0220 12:58:02.674603 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6784f9677c-8sx5l_ff341ee9-5a82-46f3-b6b5-4e4adb9a242e/console/0.log" Feb 20 12:58:02.726519 master-0 kubenswrapper[31420]: I0220 12:58:02.726474 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-955b69498-tnjkt_2497e863-ea03-4513-8d7a-3b5fef6f323a/download-server/0.log" Feb 20 12:58:03.785909 master-0 kubenswrapper[31420]: I0220 12:58:03.785861 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-d9vsg_bbdbadd9-eeaa-46ef-936e-5db8d395c118/cluster-storage-operator/1.log" Feb 20 12:58:03.787780 master-0 kubenswrapper[31420]: I0220 12:58:03.787725 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-d9vsg_bbdbadd9-eeaa-46ef-936e-5db8d395c118/cluster-storage-operator/0.log" Feb 20 12:58:03.809566 master-0 kubenswrapper[31420]: I0220 12:58:03.807576 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/3.log" Feb 20 12:58:03.809566 master-0 kubenswrapper[31420]: I0220 12:58:03.807969 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-792hn_bf8dc2a9-fcc6-41b4-ae05-ed27cc60a2f4/snapshot-controller/4.log" Feb 20 12:58:03.822898 master-0 kubenswrapper[31420]: I0220 12:58:03.822832 31420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-gbq7x/perf-node-gather-daemonset-l6w9p" Feb 20 12:58:03.843840 master-0 kubenswrapper[31420]: I0220 12:58:03.843792 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-8x7xw_839bf5b1-b242-4bbd-bc09-cf6abcf7f734/csi-snapshot-controller-operator/1.log" Feb 20 12:58:03.845960 master-0 kubenswrapper[31420]: I0220 12:58:03.844981 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-8x7xw_839bf5b1-b242-4bbd-bc09-cf6abcf7f734/csi-snapshot-controller-operator/0.log" Feb 20 12:58:04.725219 master-0 kubenswrapper[31420]: I0220 12:58:04.725153 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-8c7d49845-qhx9j_b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/dns-operator/0.log" Feb 20 12:58:04.743015 master-0 kubenswrapper[31420]: I0220 12:58:04.742964 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-8c7d49845-qhx9j_b67e36cd-ffcf-4e37-8ea3-8f949f2e93b8/kube-rbac-proxy/0.log" Feb 20 12:58:05.531545 master-0 kubenswrapper[31420]: I0220 12:58:05.531491 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-kx4ch_af18215b-e749-4565-bb6c-24e92c452817/dns/0.log" Feb 20 12:58:05.542621 master-0 kubenswrapper[31420]: I0220 12:58:05.542566 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-kx4ch_af18215b-e749-4565-bb6c-24e92c452817/kube-rbac-proxy/0.log" Feb 20 12:58:05.568537 master-0 kubenswrapper[31420]: I0220 12:58:05.568482 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-jlp7n_afa174b3-912c-4b56-b5eb-f3e3df012c11/dns-node-resolver/0.log" Feb 20 12:58:06.376186 master-0 kubenswrapper[31420]: I0220 12:58:06.376063 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-d69w2_1d3a36bb-9d11-48b3-a3b5-07b47738ef97/etcd-operator/2.log" Feb 20 12:58:06.397346 master-0 kubenswrapper[31420]: I0220 
12:58:06.390047 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-d69w2_1d3a36bb-9d11-48b3-a3b5-07b47738ef97/etcd-operator/1.log" Feb 20 12:58:07.235470 master-0 kubenswrapper[31420]: I0220 12:58:07.235371 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcdctl/0.log" Feb 20 12:58:07.699714 master-0 kubenswrapper[31420]: I0220 12:58:07.698794 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd/0.log" Feb 20 12:58:07.714860 master-0 kubenswrapper[31420]: I0220 12:58:07.714802 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-metrics/0.log" Feb 20 12:58:07.726807 master-0 kubenswrapper[31420]: I0220 12:58:07.726773 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-readyz/0.log" Feb 20 12:58:07.742517 master-0 kubenswrapper[31420]: I0220 12:58:07.742484 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-rev/0.log" Feb 20 12:58:07.753750 master-0 kubenswrapper[31420]: I0220 12:58:07.753726 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/setup/0.log" Feb 20 12:58:07.771587 master-0 kubenswrapper[31420]: I0220 12:58:07.771549 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-ensure-env-vars/0.log" Feb 20 12:58:07.785777 master-0 kubenswrapper[31420]: I0220 12:58:07.784819 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-resources-copy/0.log" Feb 20 12:58:07.865739 master-0 
kubenswrapper[31420]: I0220 12:58:07.865676 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_5710eb66-9717-4beb-a8b2-19f6886376b3/installer/0.log" Feb 20 12:58:07.972631 master-0 kubenswrapper[31420]: I0220 12:58:07.972265 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_305f625e-16b0-4840-a9e2-25571b49ad2a/installer/0.log" Feb 20 12:58:09.015996 master-0 kubenswrapper[31420]: I0220 12:58:09.015948 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-779979bdf7-r9ntt_7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/cluster-image-registry-operator/0.log" Feb 20 12:58:09.019975 master-0 kubenswrapper[31420]: I0220 12:58:09.019891 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-779979bdf7-r9ntt_7b31b66a-29ea-4c0d-b5a3-a7ed4af1daca/cluster-image-registry-operator/1.log" Feb 20 12:58:09.040041 master-0 kubenswrapper[31420]: I0220 12:58:09.039999 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-8p77l_034ed75f-05ba-4a92-8fba-40b9ee2155bf/node-ca/0.log" Feb 20 12:58:09.920643 master-0 kubenswrapper[31420]: I0220 12:58:09.920593 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/4.log" Feb 20 12:58:09.933580 master-0 kubenswrapper[31420]: I0220 12:58:09.933509 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/ingress-operator/5.log" Feb 20 12:58:09.944614 master-0 kubenswrapper[31420]: I0220 12:58:09.944320 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-kw2v6_db2a7cb1-1d05-4b24-86ed-f823fad5013e/kube-rbac-proxy/0.log" Feb 20 12:58:10.862044 master-0 kubenswrapper[31420]: I0220 12:58:10.861957 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-f6xzr_39790258-73bc-4c37-a935-e8d3c2a2d5c6/serve-healthcheck-canary/0.log" Feb 20 12:58:10.875415 master-0 kubenswrapper[31420]: I0220 12:58:10.875329 31420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" event={"ID":"488656f3-6df5-4d96-9e9f-84b142bde5d8","Type":"ContainerStarted","Data":"5cd724bccbb63e8de7125f1aabb42c3f8e7c91f8ce4095d7ba43fbb4334b6530"} Feb 20 12:58:10.899013 master-0 kubenswrapper[31420]: I0220 12:58:10.898921 31420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gbq7x/master-0-debug-z7k5t" podStartSLOduration=1.3786189690000001 podStartE2EDuration="15.898904961s" podCreationTimestamp="2026-02-20 12:57:55 +0000 UTC" firstStartedPulling="2026-02-20 12:57:55.974837006 +0000 UTC m=+3180.694075257" lastFinishedPulling="2026-02-20 12:58:10.495123008 +0000 UTC m=+3195.214361249" observedRunningTime="2026-02-20 12:58:10.895786541 +0000 UTC m=+3195.615024782" watchObservedRunningTime="2026-02-20 12:58:10.898904961 +0000 UTC m=+3195.618143202" Feb 20 12:58:11.589757 master-0 kubenswrapper[31420]: I0220 12:58:11.589467 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-59b498fcfb-hsjr7_daf25ef5-8247-4dbb-bdc1-55104b1015b7/insights-operator/0.log" Feb 20 12:58:11.807564 master-0 kubenswrapper[31420]: I0220 12:58:11.805615 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d420cbb8-46c6-400b-b143-ab6a11e0ac04/memcached/0.log" Feb 20 12:58:11.919854 master-0 kubenswrapper[31420]: I0220 12:58:11.918441 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-6c4cff4645-lz9x7_9a420cc4-49a1-449c-8180-213048aef749/neutron-api/0.log" Feb 20 12:58:11.932737 master-0 kubenswrapper[31420]: I0220 12:58:11.932689 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c4cff4645-lz9x7_9a420cc4-49a1-449c-8180-213048aef749/neutron-httpd/0.log" Feb 20 12:58:12.032816 master-0 kubenswrapper[31420]: I0220 12:58:12.032758 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b6679d6f-5fd2-407d-96b4-2dcea806dec6/nova-api-log/0.log" Feb 20 12:58:12.233060 master-0 kubenswrapper[31420]: I0220 12:58:12.232947 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b6679d6f-5fd2-407d-96b4-2dcea806dec6/nova-api-api/0.log" Feb 20 12:58:12.311600 master-0 kubenswrapper[31420]: I0220 12:58:12.311558 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6c77eb66-18a8-40b3-8194-ce0160ccfe8c/nova-cell0-conductor-conductor/0.log" Feb 20 12:58:12.435387 master-0 kubenswrapper[31420]: I0220 12:58:12.435329 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-compute-ironic-compute-0_aba54209-fbee-41c6-b8fa-a82b2534d9d7/nova-cell1-compute-ironic-compute-compute/0.log" Feb 20 12:58:12.515856 master-0 kubenswrapper[31420]: I0220 12:58:12.515791 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_929378b8-f28c-4558-8b42-8b8a297e63d9/nova-cell1-conductor-conductor/0.log" Feb 20 12:58:12.577937 master-0 kubenswrapper[31420]: I0220 12:58:12.577837 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7b2a9edf-20d3-48c8-99bf-4628575dbd9f/nova-cell1-novncproxy-novncproxy/0.log" Feb 20 12:58:12.667595 master-0 kubenswrapper[31420]: I0220 12:58:12.667548 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_0dd027dd-3995-4508-bbee-7c776c2d6fe4/nova-metadata-log/0.log" Feb 20 12:58:13.544557 master-0 kubenswrapper[31420]: I0220 12:58:13.544215 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0dd027dd-3995-4508-bbee-7c776c2d6fe4/nova-metadata-metadata/0.log" Feb 20 12:58:13.651849 master-0 kubenswrapper[31420]: I0220 12:58:13.650842 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_299e8ff2-72bd-4426-bc93-4bd2c89197cd/nova-scheduler-scheduler/0.log" Feb 20 12:58:13.669809 master-0 kubenswrapper[31420]: I0220 12:58:13.669755 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c5e587a0-149e-4023-9766-0ac33a7a5d4d/galera/0.log" Feb 20 12:58:13.681095 master-0 kubenswrapper[31420]: I0220 12:58:13.681037 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c5e587a0-149e-4023-9766-0ac33a7a5d4d/mysql-bootstrap/0.log" Feb 20 12:58:13.703520 master-0 kubenswrapper[31420]: I0220 12:58:13.702794 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a777598a-6198-4385-860b-a04696e29a88/galera/0.log" Feb 20 12:58:13.715463 master-0 kubenswrapper[31420]: I0220 12:58:13.715420 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a777598a-6198-4385-860b-a04696e29a88/mysql-bootstrap/0.log" Feb 20 12:58:13.722212 master-0 kubenswrapper[31420]: I0220 12:58:13.722189 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_50774a30-2089-4dcc-9b00-51a5a600c68b/openstackclient/0.log" Feb 20 12:58:13.744327 master-0 kubenswrapper[31420]: I0220 12:58:13.744267 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4twdx_d653c352-bccc-4fb7-bba0-97ad923e92e4/ovn-controller/0.log" Feb 20 12:58:13.749325 master-0 
kubenswrapper[31420]: I0220 12:58:13.749128 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_4d6df1a9-67a1-4776-917e-aa4aa6424faf/alertmanager/0.log" Feb 20 12:58:13.750830 master-0 kubenswrapper[31420]: I0220 12:58:13.750791 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vjbdv_87719869-6d9c-4b7e-b423-1c3c97c501c6/openstack-network-exporter/0.log" Feb 20 12:58:13.762432 master-0 kubenswrapper[31420]: I0220 12:58:13.762381 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_4d6df1a9-67a1-4776-917e-aa4aa6424faf/config-reloader/0.log" Feb 20 12:58:13.764293 master-0 kubenswrapper[31420]: I0220 12:58:13.764193 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hzmsb_c0dc8f5a-78ac-4bdf-9b05-953e0edf6616/ovsdb-server/0.log" Feb 20 12:58:13.776413 master-0 kubenswrapper[31420]: I0220 12:58:13.776377 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hzmsb_c0dc8f5a-78ac-4bdf-9b05-953e0edf6616/ovs-vswitchd/0.log" Feb 20 12:58:13.784331 master-0 kubenswrapper[31420]: I0220 12:58:13.784288 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_4d6df1a9-67a1-4776-917e-aa4aa6424faf/kube-rbac-proxy-web/0.log" Feb 20 12:58:13.785776 master-0 kubenswrapper[31420]: I0220 12:58:13.785752 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hzmsb_c0dc8f5a-78ac-4bdf-9b05-953e0edf6616/ovsdb-server-init/0.log" Feb 20 12:58:13.802286 master-0 kubenswrapper[31420]: I0220 12:58:13.802189 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_4d6df1a9-67a1-4776-917e-aa4aa6424faf/kube-rbac-proxy/0.log" Feb 20 12:58:13.802684 master-0 kubenswrapper[31420]: I0220 12:58:13.802650 31420 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-northd-0_20df0749-3b28-484d-905e-4c9027c36fb3/ovn-northd/0.log" Feb 20 12:58:13.812481 master-0 kubenswrapper[31420]: I0220 12:58:13.812443 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_20df0749-3b28-484d-905e-4c9027c36fb3/openstack-network-exporter/0.log" Feb 20 12:58:13.818137 master-0 kubenswrapper[31420]: I0220 12:58:13.818109 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_4d6df1a9-67a1-4776-917e-aa4aa6424faf/kube-rbac-proxy-metric/0.log" Feb 20 12:58:13.824775 master-0 kubenswrapper[31420]: I0220 12:58:13.824740 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e5704bbe-5b81-411c-9641-e54f70784e12/ovsdbserver-nb/0.log" Feb 20 12:58:13.830477 master-0 kubenswrapper[31420]: I0220 12:58:13.830302 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e5704bbe-5b81-411c-9641-e54f70784e12/openstack-network-exporter/0.log" Feb 20 12:58:13.840786 master-0 kubenswrapper[31420]: I0220 12:58:13.840744 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_4d6df1a9-67a1-4776-917e-aa4aa6424faf/prom-label-proxy/0.log" Feb 20 12:58:13.846886 master-0 kubenswrapper[31420]: I0220 12:58:13.846860 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a534afa2-10da-4837-9cf8-6b2416df04dd/ovsdbserver-sb/0.log" Feb 20 12:58:13.852210 master-0 kubenswrapper[31420]: I0220 12:58:13.852188 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a534afa2-10da-4837-9cf8-6b2416df04dd/openstack-network-exporter/0.log" Feb 20 12:58:13.853244 master-0 kubenswrapper[31420]: I0220 12:58:13.853225 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_alertmanager-main-0_4d6df1a9-67a1-4776-917e-aa4aa6424faf/init-config-reloader/0.log" Feb 20 12:58:13.954025 master-0 kubenswrapper[31420]: I0220 12:58:13.953929 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c4f4ddf86-w8dng_d19c8d22-6d80-4412-bf5e-11082d827b39/placement-log/0.log" Feb 20 12:58:13.964885 master-0 kubenswrapper[31420]: I0220 12:58:13.964810 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-6bb6d78bf-5zl5l_22bba1b3-587d-4802-b4ae-946827c3fa7a/cluster-monitoring-operator/0.log" Feb 20 12:58:13.979691 master-0 kubenswrapper[31420]: I0220 12:58:13.979661 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-9fdgm_042d8457-04dc-4171-8b0f-f9e3de695c46/kube-state-metrics/0.log" Feb 20 12:58:13.988464 master-0 kubenswrapper[31420]: I0220 12:58:13.988385 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-9fdgm_042d8457-04dc-4171-8b0f-f9e3de695c46/kube-rbac-proxy-main/0.log" Feb 20 12:58:14.000058 master-0 kubenswrapper[31420]: I0220 12:58:13.999651 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-9fdgm_042d8457-04dc-4171-8b0f-f9e3de695c46/kube-rbac-proxy-self/0.log" Feb 20 12:58:14.006722 master-0 kubenswrapper[31420]: I0220 12:58:14.006501 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c4f4ddf86-w8dng_d19c8d22-6d80-4412-bf5e-11082d827b39/placement-api/0.log" Feb 20 12:58:14.014957 master-0 kubenswrapper[31420]: I0220 12:58:14.014879 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-65bb9698b4-rf9nz_164fcfe3-a130-4e21-afdf-3bafadeef238/metrics-server/0.log" Feb 20 12:58:14.029002 master-0 kubenswrapper[31420]: I0220 12:58:14.028964 31420 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-6cf879bbbd-4fqq9_6e272535-93c8-4259-8775-f61f62b07be7/monitoring-plugin/0.log" Feb 20 12:58:14.037302 master-0 kubenswrapper[31420]: I0220 12:58:14.037256 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_de0e242c-6018-42c0-8a59-b755e2bd36b0/rabbitmq/0.log" Feb 20 12:58:14.042483 master-0 kubenswrapper[31420]: I0220 12:58:14.042455 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_de0e242c-6018-42c0-8a59-b755e2bd36b0/setup-container/0.log" Feb 20 12:58:14.052870 master-0 kubenswrapper[31420]: I0220 12:58:14.052480 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-8d7nc_62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/node-exporter/0.log" Feb 20 12:58:14.067273 master-0 kubenswrapper[31420]: I0220 12:58:14.067203 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-8d7nc_62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/kube-rbac-proxy/0.log" Feb 20 12:58:14.081607 master-0 kubenswrapper[31420]: I0220 12:58:14.081571 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-8d7nc_62ba4bae-a5e1-4c4d-b544-25d0e59eeac2/init-textfile/0.log" Feb 20 12:58:14.084801 master-0 kubenswrapper[31420]: I0220 12:58:14.084705 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3027dc76-27b3-44c4-b217-885670c3e29e/rabbitmq/0.log" Feb 20 12:58:14.088300 master-0 kubenswrapper[31420]: I0220 12:58:14.088266 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3027dc76-27b3-44c4-b217-885670c3e29e/setup-container/0.log" Feb 20 12:58:14.095867 master-0 kubenswrapper[31420]: I0220 12:58:14.095759 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-zbh2z_89ed6373-78f8-4d77-82b2-1ab055b5b862/kube-rbac-proxy-main/0.log" Feb 20 12:58:14.106147 master-0 kubenswrapper[31420]: I0220 12:58:14.106097 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-zbh2z_89ed6373-78f8-4d77-82b2-1ab055b5b862/kube-rbac-proxy-self/0.log" Feb 20 12:58:14.131610 master-0 kubenswrapper[31420]: I0220 12:58:14.131512 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-zbh2z_89ed6373-78f8-4d77-82b2-1ab055b5b862/openshift-state-metrics/0.log" Feb 20 12:58:14.167814 master-0 kubenswrapper[31420]: I0220 12:58:14.167723 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_e2efa6b2-2332-40b8-8e94-e7d8552ab973/prometheus/0.log" Feb 20 12:58:14.181549 master-0 kubenswrapper[31420]: I0220 12:58:14.178613 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_e2efa6b2-2332-40b8-8e94-e7d8552ab973/config-reloader/0.log" Feb 20 12:58:14.195186 master-0 kubenswrapper[31420]: I0220 12:58:14.194500 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c6dd6f84-vtwv4_eb0da685-95b9-432d-85ba-f6d0389844cb/proxy-httpd/0.log" Feb 20 12:58:14.196331 master-0 kubenswrapper[31420]: I0220 12:58:14.196296 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_e2efa6b2-2332-40b8-8e94-e7d8552ab973/thanos-sidecar/0.log" Feb 20 12:58:14.206180 master-0 kubenswrapper[31420]: I0220 12:58:14.206144 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_e2efa6b2-2332-40b8-8e94-e7d8552ab973/kube-rbac-proxy-web/0.log" Feb 20 12:58:14.208315 master-0 kubenswrapper[31420]: I0220 12:58:14.208271 31420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-6c6dd6f84-vtwv4_eb0da685-95b9-432d-85ba-f6d0389844cb/proxy-server/0.log"
Feb 20 12:58:14.218736 master-0 kubenswrapper[31420]: I0220 12:58:14.218622 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_e2efa6b2-2332-40b8-8e94-e7d8552ab973/kube-rbac-proxy/0.log"
Feb 20 12:58:14.223970 master-0 kubenswrapper[31420]: I0220 12:58:14.223933 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-lk7sj_a64c3aa0-12b3-412e-804c-0fd4a79bc80f/swift-ring-rebalance/0.log"
Feb 20 12:58:14.228119 master-0 kubenswrapper[31420]: I0220 12:58:14.228091 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_e2efa6b2-2332-40b8-8e94-e7d8552ab973/kube-rbac-proxy-thanos/0.log"
Feb 20 12:58:14.241065 master-0 kubenswrapper[31420]: I0220 12:58:14.240969 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_e2efa6b2-2332-40b8-8e94-e7d8552ab973/init-config-reloader/0.log"
Feb 20 12:58:14.244549 master-0 kubenswrapper[31420]: I0220 12:58:14.241501 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/account-server/0.log"
Feb 20 12:58:14.262504 master-0 kubenswrapper[31420]: I0220 12:58:14.262320 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-754bc4d665-5kbrl_b9fe0660-fae4-4f97-8895-dbc4845cee40/prometheus-operator/0.log"
Feb 20 12:58:14.266547 master-0 kubenswrapper[31420]: I0220 12:58:14.263920 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/account-replicator/0.log"
Feb 20 12:58:14.270545 master-0 kubenswrapper[31420]: I0220 12:58:14.267990 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/account-auditor/0.log"
Feb 20 12:58:14.274547 master-0 kubenswrapper[31420]: I0220 12:58:14.272686 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-754bc4d665-5kbrl_b9fe0660-fae4-4f97-8895-dbc4845cee40/kube-rbac-proxy/0.log"
Feb 20 12:58:14.275685 master-0 kubenswrapper[31420]: I0220 12:58:14.275654 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/account-reaper/0.log"
Feb 20 12:58:14.284990 master-0 kubenswrapper[31420]: I0220 12:58:14.284944 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-75d56db95f-s57jn_4d8cd7c5-31fd-4dca-b39b-6d62eb573707/prometheus-operator-admission-webhook/0.log"
Feb 20 12:58:14.285226 master-0 kubenswrapper[31420]: I0220 12:58:14.285181 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/container-server/0.log"
Feb 20 12:58:14.307420 master-0 kubenswrapper[31420]: I0220 12:58:14.307292 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-796b9bd86f-sp4fc_aae1df07-cf9f-47a3-b146-2a0adb182660/telemeter-client/0.log"
Feb 20 12:58:14.308755 master-0 kubenswrapper[31420]: I0220 12:58:14.308702 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/container-replicator/0.log"
Feb 20 12:58:14.313221 master-0 kubenswrapper[31420]: I0220 12:58:14.313178 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/container-auditor/0.log"
Feb 20 12:58:14.320344 master-0 kubenswrapper[31420]: I0220 12:58:14.320078 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/container-updater/0.log"
Feb 20 12:58:14.324637 master-0 kubenswrapper[31420]: I0220 12:58:14.324607 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-796b9bd86f-sp4fc_aae1df07-cf9f-47a3-b146-2a0adb182660/reload/0.log"
Feb 20 12:58:14.326395 master-0 kubenswrapper[31420]: I0220 12:58:14.326351 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/object-server/0.log"
Feb 20 12:58:14.337485 master-0 kubenswrapper[31420]: I0220 12:58:14.337441 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-796b9bd86f-sp4fc_aae1df07-cf9f-47a3-b146-2a0adb182660/kube-rbac-proxy/0.log"
Feb 20 12:58:14.341410 master-0 kubenswrapper[31420]: I0220 12:58:14.341373 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/object-replicator/0.log"
Feb 20 12:58:14.352612 master-0 kubenswrapper[31420]: I0220 12:58:14.352504 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/object-auditor/0.log"
Feb 20 12:58:14.360894 master-0 kubenswrapper[31420]: I0220 12:58:14.360856 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/object-updater/0.log"
Feb 20 12:58:14.367095 master-0 kubenswrapper[31420]: I0220 12:58:14.367058 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-57dfb4b6b4-gvjmn_b228c455-3f6c-4557-8bf1-e7b2fe45f275/thanos-query/0.log"
Feb 20 12:58:14.370969 master-0 kubenswrapper[31420]: I0220 12:58:14.370936 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/object-expirer/0.log"
Feb 20 12:58:14.378415 master-0 kubenswrapper[31420]: I0220 12:58:14.378369 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-57dfb4b6b4-gvjmn_b228c455-3f6c-4557-8bf1-e7b2fe45f275/kube-rbac-proxy-web/0.log"
Feb 20 12:58:14.378611 master-0 kubenswrapper[31420]: I0220 12:58:14.378586 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/rsync/0.log"
Feb 20 12:58:14.386830 master-0 kubenswrapper[31420]: I0220 12:58:14.386404 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6f9175e-c0a8-46d8-908a-5ffcf1bfc7ad/swift-recon-cron/0.log"
Feb 20 12:58:14.389500 master-0 kubenswrapper[31420]: I0220 12:58:14.389467 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-57dfb4b6b4-gvjmn_b228c455-3f6c-4557-8bf1-e7b2fe45f275/kube-rbac-proxy/0.log"
Feb 20 12:58:14.400847 master-0 kubenswrapper[31420]: I0220 12:58:14.400798 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-57dfb4b6b4-gvjmn_b228c455-3f6c-4557-8bf1-e7b2fe45f275/prom-label-proxy/0.log"
Feb 20 12:58:14.414489 master-0 kubenswrapper[31420]: I0220 12:58:14.414446 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-57dfb4b6b4-gvjmn_b228c455-3f6c-4557-8bf1-e7b2fe45f275/kube-rbac-proxy-rules/0.log"
Feb 20 12:58:14.425509 master-0 kubenswrapper[31420]: I0220 12:58:14.425482 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-57dfb4b6b4-gvjmn_b228c455-3f6c-4557-8bf1-e7b2fe45f275/kube-rbac-proxy-metrics/0.log"
Feb 20 12:58:17.123330 master-0 kubenswrapper[31420]: I0220 12:58:17.123279 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-vdrkc_d8a5df14-16b6-4d50-900b-8f0c241b1d1b/controller/0.log"
Feb 20 12:58:17.139560 master-0 kubenswrapper[31420]: I0220 12:58:17.136111 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-vdrkc_d8a5df14-16b6-4d50-900b-8f0c241b1d1b/kube-rbac-proxy/0.log"
Feb 20 12:58:17.160550 master-0 kubenswrapper[31420]: I0220 12:58:17.158322 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/controller/0.log"
Feb 20 12:58:18.444013 master-0 kubenswrapper[31420]: I0220 12:58:18.443967 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/frr/0.log"
Feb 20 12:58:18.464296 master-0 kubenswrapper[31420]: I0220 12:58:18.464155 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/reloader/0.log"
Feb 20 12:58:18.478314 master-0 kubenswrapper[31420]: I0220 12:58:18.477888 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/frr-metrics/0.log"
Feb 20 12:58:18.509281 master-0 kubenswrapper[31420]: I0220 12:58:18.509234 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/kube-rbac-proxy/0.log"
Feb 20 12:58:18.524517 master-0 kubenswrapper[31420]: I0220 12:58:18.524472 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/kube-rbac-proxy-frr/0.log"
Feb 20 12:58:18.537472 master-0 kubenswrapper[31420]: I0220 12:58:18.537410 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/cp-frr-files/0.log"
Feb 20 12:58:18.549622 master-0 kubenswrapper[31420]: I0220 12:58:18.549509 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/cp-reloader/0.log"
Feb 20 12:58:18.566078 master-0 kubenswrapper[31420]: I0220 12:58:18.566021 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/cp-metrics/0.log"
Feb 20 12:58:18.582256 master-0 kubenswrapper[31420]: I0220 12:58:18.582207 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-d7llz_91455b18-03a0-49c7-aa61-59b91e88a5fe/frr-k8s-webhook-server/0.log"
Feb 20 12:58:18.606364 master-0 kubenswrapper[31420]: I0220 12:58:18.606290 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7865667bdc-lwg78_05b963e1-7eca-4b48-b411-ce2bbf48fbf2/manager/0.log"
Feb 20 12:58:18.628253 master-0 kubenswrapper[31420]: I0220 12:58:18.628212 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr_8de82d75-91b3-4f8d-8bfd-e67bca18fa5e/extract/0.log"
Feb 20 12:58:18.631389 master-0 kubenswrapper[31420]: I0220 12:58:18.631354 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8486f65d77-9ck87_882e49fa-c8b8-4f18-a340-4dfdd950a449/webhook-server/0.log"
Feb 20 12:58:18.639682 master-0 kubenswrapper[31420]: I0220 12:58:18.639635 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr_8de82d75-91b3-4f8d-8bfd-e67bca18fa5e/util/0.log"
Feb 20 12:58:18.656958 master-0 kubenswrapper[31420]: I0220 12:58:18.656912 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967kncmr_8de82d75-91b3-4f8d-8bfd-e67bca18fa5e/pull/0.log"
Feb 20 12:58:19.196335 master-0 kubenswrapper[31420]: I0220 12:58:19.196281 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r94p4_b91f2548-98e3-418c-9a05-58502d67d66f/speaker/0.log"
Feb 20 12:58:19.211831 master-0 kubenswrapper[31420]: I0220 12:58:19.211790 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r94p4_b91f2548-98e3-418c-9a05-58502d67d66f/kube-rbac-proxy/0.log"
Feb 20 12:58:21.997721 master-0 kubenswrapper[31420]: I0220 12:58:21.997682 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-gwpst_4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/cluster-node-tuning-operator/1.log"
Feb 20 12:58:21.998736 master-0 kubenswrapper[31420]: I0220 12:58:21.998676 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-gwpst_4cbb46f1-1c33-42fc-8371-6a1bea8c28ff/cluster-node-tuning-operator/0.log"
Feb 20 12:58:22.017400 master-0 kubenswrapper[31420]: I0220 12:58:22.017350 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-z82cm_b9eb45bd-fc01-4707-87ea-64f07f72f6f9/tuned/0.log"
Feb 20 12:58:22.142982 master-0 kubenswrapper[31420]: I0220 12:58:22.140711 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-vdrkc_d8a5df14-16b6-4d50-900b-8f0c241b1d1b/controller/0.log"
Feb 20 12:58:22.148260 master-0 kubenswrapper[31420]: I0220 12:58:22.148229 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-vdrkc_d8a5df14-16b6-4d50-900b-8f0c241b1d1b/kube-rbac-proxy/0.log"
Feb 20 12:58:22.173024 master-0 kubenswrapper[31420]: I0220 12:58:22.172226 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/controller/0.log"
Feb 20 12:58:22.257703 master-0 kubenswrapper[31420]: I0220 12:58:22.257081 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-2db7x_5b412160-9ed7-4c10-9dc9-7fbe93d45803/manager/0.log"
Feb 20 12:58:23.111726 master-0 kubenswrapper[31420]: I0220 12:58:23.111672 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-8xjmf_f0fda7fa-0935-47fc-8c9b-723d5b352c04/manager/0.log"
Feb 20 12:58:23.128287 master-0 kubenswrapper[31420]: I0220 12:58:23.127640 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-f7vz4_3e4015cc-c404-4a2d-8ac0-a550b2b168f3/manager/0.log"
Feb 20 12:58:23.256457 master-0 kubenswrapper[31420]: I0220 12:58:23.256408 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-v6x7r_61d84cd4-22bd-4958-8c16-ea0edee7180e/manager/0.log"
Feb 20 12:58:23.266936 master-0 kubenswrapper[31420]: I0220 12:58:23.266889 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-6slqx_7d5ba596-526c-42b9-845a-9a4ec0b084e9/manager/0.log"
Feb 20 12:58:23.279989 master-0 kubenswrapper[31420]: I0220 12:58:23.279950 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-5qkng_a86f22c3-c162-407b-9f7c-ee9fec02d78e/manager/0.log"
Feb 20 12:58:23.636996 master-0 kubenswrapper[31420]: I0220 12:58:23.636868 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-kg75v_e0b28c90-d5b6-44f3-867c-020ece32ac7d/kube-apiserver-operator/1.log"
Feb 20 12:58:23.658313 master-0 kubenswrapper[31420]: I0220 12:58:23.658261 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-kg75v_e0b28c90-d5b6-44f3-867c-020ece32ac7d/kube-apiserver-operator/2.log"
Feb 20 12:58:23.660910 master-0 kubenswrapper[31420]: I0220 12:58:23.660873 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5f879c76b6-bn5dg_5d18777a-1196-401b-b94c-6c8504f5ce3b/manager/0.log"
Feb 20 12:58:23.797330 master-0 kubenswrapper[31420]: I0220 12:58:23.797260 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-bhhzk_c4b62567-b85d-476e-a92a-24b43173afd3/manager/0.log"
Feb 20 12:58:23.908861 master-0 kubenswrapper[31420]: I0220 12:58:23.908738 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-xdnq5_ec3aef87-8ef5-4e4c-a06e-3d9424c62df6/manager/0.log"
Feb 20 12:58:23.911750 master-0 kubenswrapper[31420]: I0220 12:58:23.911707 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/frr/0.log"
Feb 20 12:58:23.925665 master-0 kubenswrapper[31420]: I0220 12:58:23.925599 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/reloader/0.log"
Feb 20 12:58:23.925932 master-0 kubenswrapper[31420]: I0220 12:58:23.925737 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-57jpf_25065d47-a25e-4035-8c33-c73eb191f1b2/manager/0.log"
Feb 20 12:58:23.935208 master-0 kubenswrapper[31420]: I0220 12:58:23.934744 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/frr-metrics/0.log"
Feb 20 12:58:23.948734 master-0 kubenswrapper[31420]: I0220 12:58:23.947758 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/kube-rbac-proxy/0.log"
Feb 20 12:58:23.961864 master-0 kubenswrapper[31420]: I0220 12:58:23.960499 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/kube-rbac-proxy-frr/0.log"
Feb 20 12:58:23.965980 master-0 kubenswrapper[31420]: I0220 12:58:23.964933 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-rn9t8_7e9508f3-a3ab-4df1-b9fb-775bba9a0f43/manager/0.log"
Feb 20 12:58:23.970752 master-0 kubenswrapper[31420]: I0220 12:58:23.970700 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/cp-frr-files/0.log"
Feb 20 12:58:23.977228 master-0 kubenswrapper[31420]: I0220 12:58:23.977188 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/cp-reloader/0.log"
Feb 20 12:58:23.990323 master-0 kubenswrapper[31420]: I0220 12:58:23.990282 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cfkwh_ebec7408-42ea-4bdd-9cc9-a42caaefe664/cp-metrics/0.log"
Feb 20 12:58:24.010591 master-0 kubenswrapper[31420]: I0220 12:58:24.010541 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-d7llz_91455b18-03a0-49c7-aa61-59b91e88a5fe/frr-k8s-webhook-server/0.log"
Feb 20 12:58:24.018562 master-0 kubenswrapper[31420]: I0220 12:58:24.017150 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-rpcfg_38f6b140-e4b4-4999-af19-6dc2973ca6ed/manager/0.log"
Feb 20 12:58:24.066386 master-0 kubenswrapper[31420]: I0220 12:58:24.063181 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7865667bdc-lwg78_05b963e1-7eca-4b48-b411-ce2bbf48fbf2/manager/0.log"
Feb 20 12:58:24.084332 master-0 kubenswrapper[31420]: I0220 12:58:24.084247 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8486f65d77-9ck87_882e49fa-c8b8-4f18-a340-4dfdd950a449/webhook-server/0.log"
Feb 20 12:58:24.150084 master-0 kubenswrapper[31420]: I0220 12:58:24.150018 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-mt88g_4bec2508-5bbe-4c35-8292-94a77950167a/manager/0.log"
Feb 20 12:58:24.543580 master-0 kubenswrapper[31420]: I0220 12:58:24.543511 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r94p4_b91f2548-98e3-418c-9a05-58502d67d66f/speaker/0.log"
Feb 20 12:58:24.554196 master-0 kubenswrapper[31420]: I0220 12:58:24.554161 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r94p4_b91f2548-98e3-418c-9a05-58502d67d66f/kube-rbac-proxy/0.log"
Feb 20 12:58:24.782359 master-0 kubenswrapper[31420]: I0220 12:58:24.782222 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_74e9ba02-39d0-41fb-aed1-39923698bc0b/installer/0.log"
Feb 20 12:58:24.800397 master-0 kubenswrapper[31420]: I0220 12:58:24.800340 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-rgsrf_b55110b9-7c65-46ee-a4f2-4e9b6a69158e/manager/0.log"
Feb 20 12:58:24.803973 master-0 kubenswrapper[31420]: I0220 12:58:24.803910 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_7de8fb9d-34f7-49bc-867d-827a0f9a11e7/installer/0.log"
Feb 20 12:58:24.816480 master-0 kubenswrapper[31420]: I0220 12:58:24.814832 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-nrvcg_1db07cb7-a520-4044-95b9-05f1ec724217/manager/0.log"
Feb 20 12:58:24.827394 master-0 kubenswrapper[31420]: I0220 12:58:24.827086 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_97095f88-ee81-4a47-9bd7-1dbe71ec8d4d/installer/0.log"
Feb 20 12:58:24.857591 master-0 kubenswrapper[31420]: I0220 12:58:24.855788 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-6-master-0_bf6108c5-19ba-4f99-9f75-6e02fa5876f2/installer/0.log"
Feb 20 12:58:24.986867 master-0 kubenswrapper[31420]: I0220 12:58:24.986105 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-zcfvv_493ecdb3-0ff0-4c1f-8e5b-b713b7d9bc91/operator/0.log"
Feb 20 12:58:25.151617 master-0 kubenswrapper[31420]: E0220 12:58:25.147141 31420 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.32.10:49048->192.168.32.10:45797: read tcp 192.168.32.10:49048->192.168.32.10:45797: read: connection reset by peer
Feb 20 12:58:25.391545 master-0 kubenswrapper[31420]: I0220 12:58:25.390644 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_2202ebf88dd4d5cadde1ad8cb2bbaddc/kube-apiserver/0.log"
Feb 20 12:58:25.415535 master-0 kubenswrapper[31420]: I0220 12:58:25.414388 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_2202ebf88dd4d5cadde1ad8cb2bbaddc/kube-apiserver-cert-syncer/0.log"
Feb 20 12:58:25.457556 master-0 kubenswrapper[31420]: I0220 12:58:25.451295 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_2202ebf88dd4d5cadde1ad8cb2bbaddc/kube-apiserver-cert-regeneration-controller/0.log"
Feb 20 12:58:25.467546 master-0 kubenswrapper[31420]: I0220 12:58:25.465997 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_2202ebf88dd4d5cadde1ad8cb2bbaddc/kube-apiserver-insecure-readyz/0.log"
Feb 20 12:58:25.489543 master-0 kubenswrapper[31420]: I0220 12:58:25.489073 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_2202ebf88dd4d5cadde1ad8cb2bbaddc/kube-apiserver-check-endpoints/0.log"
Feb 20 12:58:25.536159 master-0 kubenswrapper[31420]: I0220 12:58:25.536115 31420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_2202ebf88dd4d5cadde1ad8cb2bbaddc/setup/0.log"